Apr 16 00:15:11.807289 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 16 00:15:11.807317 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Apr 15 22:28:48 -00 2026
Apr 16 00:15:11.807329 kernel: KASLR enabled
Apr 16 00:15:11.807336 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 16 00:15:11.807342 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Apr 16 00:15:11.807349 kernel: random: crng init done
Apr 16 00:15:11.807356 kernel: secureboot: Secure boot disabled
Apr 16 00:15:11.807361 kernel: ACPI: Early table checksum verification disabled
Apr 16 00:15:11.807367 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 16 00:15:11.807373 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 16 00:15:11.807381 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807387 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807392 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807398 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807405 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807412 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807418 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807424 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807430 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:15:11.807436 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 00:15:11.807442 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 16 00:15:11.807448 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 16 00:15:11.807454 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 16 00:15:11.807460 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Apr 16 00:15:11.807466 kernel: Zone ranges:
Apr 16 00:15:11.807472 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 16 00:15:11.807479 kernel: DMA32 empty
Apr 16 00:15:11.807485 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 16 00:15:11.807491 kernel: Device empty
Apr 16 00:15:11.807497 kernel: Movable zone start for each node
Apr 16 00:15:11.807503 kernel: Early memory node ranges
Apr 16 00:15:11.807509 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Apr 16 00:15:11.807515 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Apr 16 00:15:11.807521 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Apr 16 00:15:11.807526 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 16 00:15:11.807532 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 16 00:15:11.807538 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 16 00:15:11.807544 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 16 00:15:11.807551 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 16 00:15:11.807557 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 16 00:15:11.807567 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 16 00:15:11.807611 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 16 00:15:11.807618 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Apr 16 00:15:11.807628 kernel: psci: probing for conduit method from ACPI.
Apr 16 00:15:11.807634 kernel: psci: PSCIv1.1 detected in firmware.
Apr 16 00:15:11.807642 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 16 00:15:11.807650 kernel: psci: Trusted OS migration not required
Apr 16 00:15:11.807656 kernel: psci: SMC Calling Convention v1.1
Apr 16 00:15:11.807662 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 16 00:15:11.807669 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 16 00:15:11.807676 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 16 00:15:11.807682 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 16 00:15:11.807689 kernel: Detected PIPT I-cache on CPU0
Apr 16 00:15:11.807695 kernel: CPU features: detected: GIC system register CPU interface
Apr 16 00:15:11.807703 kernel: CPU features: detected: Spectre-v4
Apr 16 00:15:11.807709 kernel: CPU features: detected: Spectre-BHB
Apr 16 00:15:11.807716 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 16 00:15:11.807722 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 16 00:15:11.807729 kernel: CPU features: detected: ARM erratum 1418040
Apr 16 00:15:11.807735 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 16 00:15:11.807741 kernel: alternatives: applying boot alternatives
Apr 16 00:15:11.807749 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=172170ce4924328797d9dee52d97e9cb5061c8270599cff4bddece75ce644e31
Apr 16 00:15:11.807757 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 00:15:11.807765 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 00:15:11.807772 kernel: Fallback order for Node 0: 0
Apr 16 00:15:11.807781 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Apr 16 00:15:11.807788 kernel: Policy zone: Normal
Apr 16 00:15:11.807796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 00:15:11.807804 kernel: software IO TLB: area num 2.
Apr 16 00:15:11.807810 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Apr 16 00:15:11.807817 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 00:15:11.807823 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 00:15:11.807830 kernel: rcu: RCU event tracing is enabled.
Apr 16 00:15:11.807837 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 00:15:11.807843 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 00:15:11.807850 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 00:15:11.807856 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 00:15:11.807865 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 00:15:11.807871 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 00:15:11.807878 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 00:15:11.807884 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 16 00:15:11.807891 kernel: GICv3: 256 SPIs implemented
Apr 16 00:15:11.807897 kernel: GICv3: 0 Extended SPIs implemented
Apr 16 00:15:11.807903 kernel: Root IRQ handler: gic_handle_irq
Apr 16 00:15:11.807910 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 16 00:15:11.807916 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Apr 16 00:15:11.807922 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 16 00:15:11.807928 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 16 00:15:11.807936 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Apr 16 00:15:11.807943 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Apr 16 00:15:11.807949 kernel: GICv3: using LPI property table @0x0000000100120000
Apr 16 00:15:11.807956 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Apr 16 00:15:11.807962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 00:15:11.807969 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 00:15:11.807975 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 16 00:15:11.807982 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 16 00:15:11.807988 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 16 00:15:11.807995 kernel: Console: colour dummy device 80x25
Apr 16 00:15:11.808002 kernel: ACPI: Core revision 20240827
Apr 16 00:15:11.808012 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 16 00:15:11.808018 kernel: pid_max: default: 32768 minimum: 301
Apr 16 00:15:11.808025 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 00:15:11.808032 kernel: landlock: Up and running.
Apr 16 00:15:11.808038 kernel: SELinux: Initializing.
Apr 16 00:15:11.808045 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 00:15:11.808052 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 00:15:11.808058 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 00:15:11.808065 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 00:15:11.808073 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 00:15:11.808080 kernel: Remapping and enabling EFI services.
Apr 16 00:15:11.808087 kernel: smp: Bringing up secondary CPUs ...
Apr 16 00:15:11.808093 kernel: Detected PIPT I-cache on CPU1
Apr 16 00:15:11.808100 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 16 00:15:11.808107 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Apr 16 00:15:11.808114 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 00:15:11.808120 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 16 00:15:11.808127 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 00:15:11.808135 kernel: SMP: Total of 2 processors activated.
Apr 16 00:15:11.808150 kernel: CPU: All CPU(s) started at EL1
Apr 16 00:15:11.808158 kernel: CPU features: detected: 32-bit EL0 Support
Apr 16 00:15:11.808168 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 16 00:15:11.810193 kernel: CPU features: detected: Common not Private translations
Apr 16 00:15:11.810218 kernel: CPU features: detected: CRC32 instructions
Apr 16 00:15:11.810226 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 16 00:15:11.810233 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 16 00:15:11.810245 kernel: CPU features: detected: LSE atomic instructions
Apr 16 00:15:11.810253 kernel: CPU features: detected: Privileged Access Never
Apr 16 00:15:11.810260 kernel: CPU features: detected: RAS Extension Support
Apr 16 00:15:11.810267 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 16 00:15:11.810275 kernel: alternatives: applying system-wide alternatives
Apr 16 00:15:11.810282 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Apr 16 00:15:11.810290 kernel: Memory: 3858780K/4096000K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 215732K reserved, 16384K cma-reserved)
Apr 16 00:15:11.810298 kernel: devtmpfs: initialized
Apr 16 00:15:11.810307 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 00:15:11.810316 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 00:15:11.810324 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 16 00:15:11.810330 kernel: 0 pages in range for non-PLT usage
Apr 16 00:15:11.810337 kernel: 508384 pages in range for PLT usage
Apr 16 00:15:11.810344 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 00:15:11.810351 kernel: SMBIOS 3.0.0 present.
Apr 16 00:15:11.810359 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 16 00:15:11.810366 kernel: DMI: Memory slots populated: 1/1
Apr 16 00:15:11.810373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 00:15:11.810383 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 16 00:15:11.810390 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 16 00:15:11.810399 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 16 00:15:11.810406 kernel: audit: initializing netlink subsys (disabled)
Apr 16 00:15:11.810413 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Apr 16 00:15:11.810420 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 00:15:11.810428 kernel: cpuidle: using governor menu
Apr 16 00:15:11.810435 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 16 00:15:11.810442 kernel: ASID allocator initialised with 32768 entries
Apr 16 00:15:11.810451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 00:15:11.810459 kernel: Serial: AMBA PL011 UART driver
Apr 16 00:15:11.810467 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 00:15:11.810475 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 00:15:11.810482 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 16 00:15:11.810490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 16 00:15:11.810498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 00:15:11.810506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 00:15:11.810513 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 16 00:15:11.810522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 16 00:15:11.810529 kernel: ACPI: Added _OSI(Module Device)
Apr 16 00:15:11.810536 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 00:15:11.810543 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 00:15:11.810550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 00:15:11.810557 kernel: ACPI: Interpreter enabled
Apr 16 00:15:11.810564 kernel: ACPI: Using GIC for interrupt routing
Apr 16 00:15:11.810600 kernel: ACPI: MCFG table detected, 1 entries
Apr 16 00:15:11.810608 kernel: ACPI: CPU0 has been hot-added
Apr 16 00:15:11.810619 kernel: ACPI: CPU1 has been hot-added
Apr 16 00:15:11.810626 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 16 00:15:11.810633 kernel: printk: legacy console [ttyAMA0] enabled
Apr 16 00:15:11.810640 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 00:15:11.810819 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 00:15:11.810888 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 16 00:15:11.810948 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 16 00:15:11.811014 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 16 00:15:11.811077 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 16 00:15:11.811086 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 16 00:15:11.811093 kernel: PCI host bridge to bus 0000:00
Apr 16 00:15:11.811162 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 16 00:15:11.811246 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 16 00:15:11.811314 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 16 00:15:11.811366 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 00:15:11.811449 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Apr 16 00:15:11.811520 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Apr 16 00:15:11.811601 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Apr 16 00:15:11.811673 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 16 00:15:11.811748 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.811810 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Apr 16 00:15:11.811879 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 00:15:11.811941 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Apr 16 00:15:11.812005 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Apr 16 00:15:11.812073 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.812142 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Apr 16 00:15:11.814406 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 00:15:11.814496 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Apr 16 00:15:11.814607 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.814682 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Apr 16 00:15:11.814742 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 00:15:11.814804 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Apr 16 00:15:11.814871 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Apr 16 00:15:11.814937 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.814998 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Apr 16 00:15:11.815069 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 00:15:11.815138 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Apr 16 00:15:11.815212 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Apr 16 00:15:11.815286 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.815351 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Apr 16 00:15:11.815411 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 00:15:11.815471 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 16 00:15:11.815534 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Apr 16 00:15:11.815642 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.815709 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Apr 16 00:15:11.815770 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 00:15:11.815828 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Apr 16 00:15:11.815895 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Apr 16 00:15:11.815962 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.816024 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Apr 16 00:15:11.816086 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 00:15:11.816145 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Apr 16 00:15:11.817335 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Apr 16 00:15:11.817431 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.817494 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Apr 16 00:15:11.817563 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 00:15:11.817654 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Apr 16 00:15:11.817726 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 00:15:11.817799 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Apr 16 00:15:11.817878 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 00:15:11.817951 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Apr 16 00:15:11.818032 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Apr 16 00:15:11.818100 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Apr 16 00:15:11.818218 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 00:15:11.818291 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Apr 16 00:15:11.818353 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 16 00:15:11.818413 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 00:15:11.818485 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 16 00:15:11.818546 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Apr 16 00:15:11.818667 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 16 00:15:11.818741 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Apr 16 00:15:11.818844 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 16 00:15:11.818966 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 00:15:11.819046 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 16 00:15:11.819146 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 00:15:11.819282 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Apr 16 00:15:11.819380 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 16 00:15:11.819472 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 16 00:15:11.819561 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Apr 16 00:15:11.819694 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 16 00:15:11.819789 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 00:15:11.819870 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Apr 16 00:15:11.819956 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 16 00:15:11.820034 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 00:15:11.820117 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 16 00:15:11.820213 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 16 00:15:11.820296 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 16 00:15:11.820376 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 16 00:15:11.820453 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 16 00:15:11.820535 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 16 00:15:11.820680 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 16 00:15:11.820773 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 16 00:15:11.820943 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 16 00:15:11.821036 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 16 00:15:11.821125 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 16 00:15:11.821237 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 16 00:15:11.821325 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 16 00:15:11.821403 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 16 00:15:11.821479 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 16 00:15:11.821566 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 16 00:15:11.821701 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 16 00:15:11.821801 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 16 00:15:11.823310 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 16 00:15:11.823453 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 16 00:15:11.823520 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 16 00:15:11.823611 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 16 00:15:11.823679 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 16 00:15:11.823774 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 16 00:15:11.823849 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 16 00:15:11.823927 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 16 00:15:11.823995 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 16 00:15:11.824066 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Apr 16 00:15:11.824133 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Apr 16 00:15:11.825341 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Apr 16 00:15:11.825436 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Apr 16 00:15:11.825510 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Apr 16 00:15:11.825638 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Apr 16 00:15:11.825722 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Apr 16 00:15:11.825792 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Apr 16 00:15:11.825866 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Apr 16 00:15:11.825935 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Apr 16 00:15:11.826005 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Apr 16 00:15:11.826074 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Apr 16 00:15:11.826146 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Apr 16 00:15:11.827329 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Apr 16 00:15:11.827425 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Apr 16 00:15:11.827495 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Apr 16 00:15:11.827566 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Apr 16 00:15:11.827655 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Apr 16 00:15:11.827729 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Apr 16 00:15:11.827796 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Apr 16 00:15:11.827866 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Apr 16 00:15:11.827939 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Apr 16 00:15:11.828009 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Apr 16 00:15:11.828076 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Apr 16 00:15:11.828145 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Apr 16 00:15:11.830283 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Apr 16 00:15:11.830371 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Apr 16 00:15:11.830434 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Apr 16 00:15:11.830499 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Apr 16 00:15:11.830560 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Apr 16 00:15:11.830674 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Apr 16 00:15:11.830743 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Apr 16 00:15:11.830807 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Apr 16 00:15:11.830876 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Apr 16 00:15:11.830940 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Apr 16 00:15:11.830999 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Apr 16 00:15:11.831061 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Apr 16 00:15:11.831119 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Apr 16 00:15:11.831201 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Apr 16 00:15:11.831274 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Apr 16 00:15:11.831336 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Apr 16 00:15:11.831400 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Apr 16 00:15:11.831461 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 00:15:11.831520 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 16 00:15:11.831593 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 16 00:15:11.831657 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 16 00:15:11.831728 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Apr 16 00:15:11.831801 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 00:15:11.831869 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 16 00:15:11.831928 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 16 00:15:11.831988 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 16 00:15:11.832063 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Apr 16 00:15:11.832126 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Apr 16 00:15:11.833310 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 00:15:11.833413 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 16 00:15:11.833481 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 16 00:15:11.833541 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 16 00:15:11.833627 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Apr 16 00:15:11.833694 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 00:15:11.833754 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 16 00:15:11.833812 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 16 00:15:11.833870 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 16 00:15:11.833939 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Apr 16 00:15:11.833999 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Apr 16 00:15:11.834060 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 00:15:11.834119 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 16 00:15:11.834729 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 16 00:15:11.834835 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 16 00:15:11.834905 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Apr 16 00:15:11.834990 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Apr 16 00:15:11.835055 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 00:15:11.835130 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 16 00:15:11.835205 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 16 00:15:11.835265 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 16 00:15:11.835333 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Apr 16 00:15:11.835396 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Apr 16 00:15:11.835461 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Apr 16 00:15:11.835521 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 00:15:11.835624 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 16 00:15:11.835694 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 16 00:15:11.835757 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 16 00:15:11.835828 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 00:15:11.835890 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 16 00:15:11.835947 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 16 00:15:11.836006 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 16 00:15:11.836068 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 00:15:11.836127 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 16
00:15:11.836242 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Apr 16 00:15:11.836316 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Apr 16 00:15:11.836380 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 16 00:15:11.836433 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 16 00:15:11.836486 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 16 00:15:11.836551 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Apr 16 00:15:11.836622 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Apr 16 00:15:11.836683 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Apr 16 00:15:11.836754 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Apr 16 00:15:11.836809 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Apr 16 00:15:11.836863 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Apr 16 00:15:11.837523 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Apr 16 00:15:11.837654 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Apr 16 00:15:11.837727 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Apr 16 00:15:11.837792 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 16 00:15:11.837849 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Apr 16 00:15:11.837905 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Apr 16 00:15:11.837968 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Apr 16 00:15:11.838023 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Apr 16 00:15:11.838076 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Apr 16 00:15:11.838159 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Apr 16 00:15:11.838271 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Apr 16 00:15:11.838332 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 16 00:15:11.838397 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Apr 16 00:15:11.838453 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Apr 16 00:15:11.838508 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 16 00:15:11.838765 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Apr 16 00:15:11.838846 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Apr 16 00:15:11.838901 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 16 00:15:11.838980 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Apr 16 00:15:11.839040 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Apr 16 00:15:11.839094 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Apr 16 00:15:11.839104 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 16 00:15:11.839111 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 16 00:15:11.839122 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 16 00:15:11.839130 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 16 00:15:11.839137 kernel: iommu: Default domain type: Translated Apr 16 00:15:11.839145 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 16 00:15:11.839159 kernel: efivars: Registered efivars operations Apr 16 00:15:11.839168 kernel: vgaarb: loaded Apr 16 00:15:11.839196 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 16 00:15:11.839205 kernel: VFS: Disk quotas dquot_6.6.0 Apr 16 00:15:11.839212 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 16 00:15:11.839222 kernel: pnp: PnP ACPI init Apr 16 00:15:11.839320 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 16 00:15:11.839333 kernel: pnp: PnP ACPI: found 1 devices Apr 16 00:15:11.839341 kernel: NET: Registered PF_INET 
protocol family Apr 16 00:15:11.839348 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 16 00:15:11.839355 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 16 00:15:11.839363 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 16 00:15:11.839370 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 16 00:15:11.839380 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 16 00:15:11.839387 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 16 00:15:11.839395 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 00:15:11.839402 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 00:15:11.839410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 00:15:11.839478 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 16 00:15:11.839490 kernel: PCI: CLS 0 bytes, default 64 Apr 16 00:15:11.839497 kernel: kvm [1]: HYP mode not available Apr 16 00:15:11.839505 kernel: Initialise system trusted keyrings Apr 16 00:15:11.839515 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 00:15:11.839523 kernel: Key type asymmetric registered Apr 16 00:15:11.839530 kernel: Asymmetric key parser 'x509' registered Apr 16 00:15:11.839538 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 16 00:15:11.839545 kernel: io scheduler mq-deadline registered Apr 16 00:15:11.839553 kernel: io scheduler kyber registered Apr 16 00:15:11.839560 kernel: io scheduler bfq registered Apr 16 00:15:11.839577 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 16 00:15:11.839668 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 16 00:15:11.839738 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 16 00:15:11.839798 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.839859 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 16 00:15:11.839919 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 16 00:15:11.839978 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.840040 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 16 00:15:11.840099 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 16 00:15:11.840157 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.840847 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 16 00:15:11.840926 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 16 00:15:11.840987 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.841050 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 16 00:15:11.841110 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 16 00:15:11.841171 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.841306 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 16 00:15:11.841380 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 16 00:15:11.841444 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.841510 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 16 00:15:11.841582 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 16 00:15:11.841646 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.841712 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 16 00:15:11.841772 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 16 00:15:11.841835 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.841847 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 16 00:15:11.841910 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 16 00:15:11.841971 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 16 00:15:11.842029 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:15:11.842039 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 16 00:15:11.842046 kernel: ACPI: button: Power Button [PWRB] Apr 16 00:15:11.842054 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 16 00:15:11.842120 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 16 00:15:11.842234 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 16 00:15:11.842247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 00:15:11.842256 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 16 00:15:11.843351 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 16 00:15:11.843372 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 16 00:15:11.843380 kernel: thunder_xcv, ver 1.0 Apr 16 00:15:11.843388 kernel: thunder_bgx, ver 1.0 Apr 16 00:15:11.843395 kernel: nicpf, ver 1.0 Apr 16 00:15:11.843402 kernel: nicvf, ver 1.0 Apr 16 00:15:11.843488 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 16 00:15:11.843546 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-16T00:15:11 UTC (1776298511) Apr 16 00:15:11.843556 kernel: hid: raw HID events 
driver (C) Jiri Kosina Apr 16 00:15:11.843563 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Apr 16 00:15:11.843584 kernel: watchdog: NMI not fully supported Apr 16 00:15:11.843592 kernel: watchdog: Hard watchdog permanently disabled Apr 16 00:15:11.843600 kernel: NET: Registered PF_INET6 protocol family Apr 16 00:15:11.843607 kernel: Segment Routing with IPv6 Apr 16 00:15:11.843618 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 00:15:11.843625 kernel: NET: Registered PF_PACKET protocol family Apr 16 00:15:11.843633 kernel: Key type dns_resolver registered Apr 16 00:15:11.843640 kernel: registered taskstats version 1 Apr 16 00:15:11.843648 kernel: Loading compiled-in X.509 certificates Apr 16 00:15:11.843655 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 458be50765be8c6d94340d3d282c3e0743840df9' Apr 16 00:15:11.843662 kernel: Demotion targets for Node 0: null Apr 16 00:15:11.843670 kernel: Key type .fscrypt registered Apr 16 00:15:11.843677 kernel: Key type fscrypt-provisioning registered Apr 16 00:15:11.843686 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 00:15:11.843693 kernel: ima: Allocated hash algorithm: sha1 Apr 16 00:15:11.843701 kernel: ima: No architecture policies found Apr 16 00:15:11.843708 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 16 00:15:11.843715 kernel: clk: Disabling unused clocks Apr 16 00:15:11.843722 kernel: PM: genpd: Disabling unused power domains Apr 16 00:15:11.843730 kernel: Warning: unable to open an initial console. Apr 16 00:15:11.843738 kernel: Freeing unused kernel memory: 39552K Apr 16 00:15:11.843745 kernel: Run /init as init process Apr 16 00:15:11.843752 kernel: with arguments: Apr 16 00:15:11.843761 kernel: /init Apr 16 00:15:11.843768 kernel: with environment: Apr 16 00:15:11.843775 kernel: HOME=/ Apr 16 00:15:11.843782 kernel: TERM=linux Apr 16 00:15:11.843790 systemd[1]: Successfully made /usr/ read-only. 
Apr 16 00:15:11.843801 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 00:15:11.843810 systemd[1]: Detected virtualization kvm. Apr 16 00:15:11.843819 systemd[1]: Detected architecture arm64. Apr 16 00:15:11.843826 systemd[1]: Running in initrd. Apr 16 00:15:11.843834 systemd[1]: No hostname configured, using default hostname. Apr 16 00:15:11.843842 systemd[1]: Hostname set to . Apr 16 00:15:11.843850 systemd[1]: Initializing machine ID from VM UUID. Apr 16 00:15:11.843857 systemd[1]: Queued start job for default target initrd.target. Apr 16 00:15:11.843865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 00:15:11.843873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 00:15:11.843883 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 16 00:15:11.843891 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 00:15:11.843899 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 00:15:11.843910 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 00:15:11.843919 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 00:15:11.843927 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 00:15:11.843934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 16 00:15:11.843944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 00:15:11.843951 systemd[1]: Reached target paths.target - Path Units. Apr 16 00:15:11.843959 systemd[1]: Reached target slices.target - Slice Units. Apr 16 00:15:11.843967 systemd[1]: Reached target swap.target - Swaps. Apr 16 00:15:11.843975 systemd[1]: Reached target timers.target - Timer Units. Apr 16 00:15:11.843983 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 00:15:11.843990 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 00:15:11.843998 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 00:15:11.844006 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 16 00:15:11.844015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 00:15:11.844023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 00:15:11.844031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 00:15:11.844039 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 00:15:11.844046 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 00:15:11.844054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 00:15:11.844062 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 00:15:11.844070 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 00:15:11.844079 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 00:15:11.844087 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 00:15:11.844095 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 16 00:15:11.844103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:15:11.844110 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 00:15:11.844119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 00:15:11.844129 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 00:15:11.844137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 00:15:11.844172 systemd-journald[245]: Collecting audit messages is disabled. Apr 16 00:15:11.844244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:15:11.844253 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 00:15:11.844261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 00:15:11.844269 kernel: Bridge firewalling registered Apr 16 00:15:11.844277 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 00:15:11.844285 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 00:15:11.844293 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 00:15:11.844303 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 00:15:11.844312 systemd-journald[245]: Journal started Apr 16 00:15:11.844331 systemd-journald[245]: Runtime Journal (/run/log/journal/0a1acdabf14f41d899e97bc4fef4e07f) is 8M, max 76.5M, 68.5M free. Apr 16 00:15:11.796894 systemd-modules-load[246]: Inserted module 'overlay' Apr 16 00:15:11.849513 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 16 00:15:11.821030 systemd-modules-load[246]: Inserted module 'br_netfilter' Apr 16 00:15:11.855962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 00:15:11.857938 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 00:15:11.864361 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 00:15:11.867331 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 00:15:11.870260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 00:15:11.885769 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 16 00:15:11.889479 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 00:15:11.891795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 00:15:11.901994 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=172170ce4924328797d9dee52d97e9cb5061c8270599cff4bddece75ce644e31 Apr 16 00:15:11.940102 systemd-resolved[292]: Positive Trust Anchors: Apr 16 00:15:11.940123 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 00:15:11.940155 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 00:15:11.945895 systemd-resolved[292]: Defaulting to hostname 'linux'. Apr 16 00:15:11.946922 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 00:15:11.950346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 00:15:12.008246 kernel: SCSI subsystem initialized Apr 16 00:15:12.013222 kernel: Loading iSCSI transport class v2.0-870. Apr 16 00:15:12.021230 kernel: iscsi: registered transport (tcp) Apr 16 00:15:12.034272 kernel: iscsi: registered transport (qla4xxx) Apr 16 00:15:12.034394 kernel: QLogic iSCSI HBA Driver Apr 16 00:15:12.057399 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 00:15:12.079394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 00:15:12.084695 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 00:15:12.142491 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 00:15:12.144326 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 16 00:15:12.208224 kernel: raid6: neonx8 gen() 15745 MB/s Apr 16 00:15:12.225227 kernel: raid6: neonx4 gen() 15760 MB/s Apr 16 00:15:12.242305 kernel: raid6: neonx2 gen() 13158 MB/s Apr 16 00:15:12.259258 kernel: raid6: neonx1 gen() 10409 MB/s Apr 16 00:15:12.276255 kernel: raid6: int64x8 gen() 6861 MB/s Apr 16 00:15:12.293233 kernel: raid6: int64x4 gen() 7324 MB/s Apr 16 00:15:12.310253 kernel: raid6: int64x2 gen() 6084 MB/s Apr 16 00:15:12.327242 kernel: raid6: int64x1 gen() 5033 MB/s Apr 16 00:15:12.327332 kernel: raid6: using algorithm neonx4 gen() 15760 MB/s Apr 16 00:15:12.344265 kernel: raid6: .... xor() 12314 MB/s, rmw enabled Apr 16 00:15:12.344342 kernel: raid6: using neon recovery algorithm Apr 16 00:15:12.350296 kernel: xor: measuring software checksum speed Apr 16 00:15:12.350368 kernel: 8regs : 21596 MB/sec Apr 16 00:15:12.350384 kernel: 32regs : 21704 MB/sec Apr 16 00:15:12.350409 kernel: arm64_neon : 28109 MB/sec Apr 16 00:15:12.351213 kernel: xor: using function: arm64_neon (28109 MB/sec) Apr 16 00:15:12.405306 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 00:15:12.413300 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 00:15:12.416961 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 00:15:12.447927 systemd-udevd[496]: Using default interface naming scheme 'v255'. Apr 16 00:15:12.452273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 00:15:12.456142 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 00:15:12.483767 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Apr 16 00:15:12.514343 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 00:15:12.516627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 00:15:12.582540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
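The raid6/xor lines earlier in this stretch are simple boot-time benchmarks: the kernel measures each candidate routine and keeps the fastest one. A minimal sketch of that selection, using the throughput figures reported above (the dict literal just restates the log's numbers):

```python
# Generator throughputs reported by the raid6 benchmark above, in MB/s.
raid6_gen = {
    "neonx8": 15745, "neonx4": 15760, "neonx2": 13158, "neonx1": 10409,
    "int64x8": 6861, "int64x4": 7324, "int64x2": 6084, "int64x1": 5033,
}

# The kernel keeps whichever routine benchmarked fastest.
best = max(raid6_gen, key=raid6_gen.get)
print(best, raid6_gen[best])  # neonx4 15760, matching "using algorithm neonx4 gen()"
```

The same logic explains the xor lines: `arm64_neon` at 28109 MB/sec beats `8regs` and `32regs`, so it becomes the chosen checksum function.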
Apr 16 00:15:12.587359 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 00:15:12.663315 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Apr 16 00:15:12.664397 kernel: scsi host0: Virtio SCSI HBA Apr 16 00:15:12.668202 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 00:15:12.669196 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 16 00:15:12.691378 kernel: ACPI: bus type USB registered Apr 16 00:15:12.691450 kernel: usbcore: registered new interface driver usbfs Apr 16 00:15:12.691462 kernel: usbcore: registered new interface driver hub Apr 16 00:15:12.691470 kernel: usbcore: registered new device driver usb Apr 16 00:15:12.725494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 00:15:12.725608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:15:12.728224 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:15:12.731806 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 16 00:15:12.731979 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 16 00:15:12.732066 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 16 00:15:12.732140 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 16 00:15:12.732976 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 16 00:15:12.732310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:15:12.736213 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 16 00:15:12.740200 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 16 00:15:12.740370 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 00:15:12.742389 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Apr 16 00:15:12.742439 kernel: GPT:17805311 != 80003071 Apr 16 00:15:12.742459 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 00:15:12.743602 kernel: GPT:17805311 != 80003071 Apr 16 00:15:12.743643 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 00:15:12.743655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 00:15:12.743666 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 16 00:15:12.746214 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 16 00:15:12.754663 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 16 00:15:12.754848 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 16 00:15:12.756212 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 16 00:15:12.759852 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 16 00:15:12.760087 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 16 00:15:12.761245 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 16 00:15:12.763693 kernel: hub 1-0:1.0: USB hub found Apr 16 00:15:12.763879 kernel: hub 1-0:1.0: 4 ports detected Apr 16 00:15:12.763970 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 16 00:15:12.766124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:15:12.767757 kernel: hub 2-0:1.0: USB hub found Apr 16 00:15:12.769278 kernel: hub 2-0:1.0: 4 ports detected Apr 16 00:15:12.825139 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 16 00:15:12.835213 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 16 00:15:12.844259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 16 00:15:12.854856 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Apr 16 00:15:12.855790 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 16 00:15:12.859075 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 00:15:12.873812 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 00:15:12.877482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 00:15:12.879537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 00:15:12.882472 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 00:15:12.885776 disk-uuid[600]: Primary Header is updated. Apr 16 00:15:12.885776 disk-uuid[600]: Secondary Entries is updated. Apr 16 00:15:12.885776 disk-uuid[600]: Secondary Header is updated. Apr 16 00:15:12.888350 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 00:15:12.899206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 00:15:12.909413 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
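The GPT complaints above ("Alternate GPT header not at the end of the disk") are the classic signature of a disk image built for a smaller device and then attached to a larger volume: the backup header still sits at the old last LBA. A quick check of the numbers taken from the log itself:

```python
SECTOR = 512

# Figures taken from the kernel messages above.
disk_sectors = 80003072   # "sd 0:0:0:1: [sda] 80003072 512-byte logical blocks"
stale_alt_lba = 17805311  # "GPT:17805311 != 80003071"

# A valid backup GPT header lives on the disk's last LBA.
expected_alt_lba = disk_sectors - 1
assert expected_alt_lba == 80003071  # the value the kernel expected

# The stale LBA reveals the size of the disk the image was built for:
image_bytes = (stale_alt_lba + 1) * SECTOR
print(image_bytes / 2**30)  # ~8.49 GiB image, later grown to the ~41 GB volume
```

No manual `parted`/`sgdisk` intervention is needed in this boot: the `disk-uuid` entries just above ("Primary Header is updated. ... Secondary Header is updated.") show the initrd rewriting the headers itself before the partition table is re-read.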
Apr 16 00:15:13.004639 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 16 00:15:13.133205 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 16 00:15:13.134193 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 16 00:15:13.135314 kernel: usbcore: registered new interface driver usbhid Apr 16 00:15:13.135342 kernel: usbhid: USB HID core driver Apr 16 00:15:13.239257 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 16 00:15:13.367242 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 16 00:15:13.420249 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 16 00:15:13.914312 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 00:15:13.915870 disk-uuid[602]: The operation has completed successfully. Apr 16 00:15:13.966280 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 00:15:13.966412 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 00:15:14.001096 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 00:15:14.018108 sh[625]: Success Apr 16 00:15:14.035406 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 00:15:14.035461 kernel: device-mapper: uevent: version 1.0.3 Apr 16 00:15:14.035473 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 00:15:14.044217 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Apr 16 00:15:14.091810 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Apr 16 00:15:14.097328 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 00:15:14.107834 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 16 00:15:14.118268 kernel: BTRFS: device fsid 1005f484-fa07-42ff-96a1-c2f162506f15 devid 1 transid 33 /dev/mapper/usr (254:0) scanned by mount (637) Apr 16 00:15:14.120508 kernel: BTRFS info (device dm-0): first mount of filesystem 1005f484-fa07-42ff-96a1-c2f162506f15 Apr 16 00:15:14.120603 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 16 00:15:14.131785 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 16 00:15:14.131851 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 00:15:14.131865 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 00:15:14.135282 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 00:15:14.136884 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 00:15:14.138576 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 00:15:14.140894 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 00:15:14.142686 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 16 00:15:14.168207 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (667)
Apr 16 00:15:14.169511 kernel: BTRFS info (device sda6): first mount of filesystem 5d984f6c-8904-4549-88de-a3d0a0f4ead8
Apr 16 00:15:14.169581 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 00:15:14.174474 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 00:15:14.174553 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 00:15:14.174571 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 00:15:14.180234 kernel: BTRFS info (device sda6): last unmount of filesystem 5d984f6c-8904-4549-88de-a3d0a0f4ead8
Apr 16 00:15:14.181374 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 00:15:14.184467 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 00:15:14.289463 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 00:15:14.293632 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 00:15:14.334237 ignition[715]: Ignition 2.22.0
Apr 16 00:15:14.334250 ignition[715]: Stage: fetch-offline
Apr 16 00:15:14.334280 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:15:14.334287 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:15:14.334364 ignition[715]: parsed url from cmdline: ""
Apr 16 00:15:14.334367 ignition[715]: no config URL provided
Apr 16 00:15:14.334371 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 00:15:14.334377 ignition[715]: no config at "/usr/lib/ignition/user.ign"
Apr 16 00:15:14.334382 ignition[715]: failed to fetch config: resource requires networking
Apr 16 00:15:14.334532 ignition[715]: Ignition finished successfully
Apr 16 00:15:14.338599 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 00:15:14.347844 systemd-networkd[812]: lo: Link UP
Apr 16 00:15:14.347864 systemd-networkd[812]: lo: Gained carrier
Apr 16 00:15:14.350077 systemd-networkd[812]: Enumeration completed
Apr 16 00:15:14.351023 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 00:15:14.351051 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:15:14.351055 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 00:15:14.352267 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:15:14.352271 systemd-networkd[812]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 00:15:14.352730 systemd[1]: Reached target network.target - Network.
Apr 16 00:15:14.353395 systemd-networkd[812]: eth0: Link UP
Apr 16 00:15:14.353881 systemd-networkd[812]: eth1: Link UP
Apr 16 00:15:14.354061 systemd-networkd[812]: eth0: Gained carrier
Apr 16 00:15:14.354073 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:15:14.357991 systemd-networkd[812]: eth1: Gained carrier
Apr 16 00:15:14.358002 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:15:14.359076 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 00:15:14.393684 ignition[817]: Ignition 2.22.0
Apr 16 00:15:14.394383 ignition[817]: Stage: fetch
Apr 16 00:15:14.394590 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:15:14.394601 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:15:14.396358 systemd-networkd[812]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 00:15:14.394697 ignition[817]: parsed url from cmdline: ""
Apr 16 00:15:14.394700 ignition[817]: no config URL provided
Apr 16 00:15:14.394705 ignition[817]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 00:15:14.394712 ignition[817]: no config at "/usr/lib/ignition/user.ign"
Apr 16 00:15:14.394755 ignition[817]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 16 00:15:14.395204 ignition[817]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 16 00:15:14.408302 systemd-networkd[812]: eth0: DHCPv4 address 88.198.131.37/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 00:15:14.596061 ignition[817]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 16 00:15:14.602063 ignition[817]: GET result: OK
Apr 16 00:15:14.602390 ignition[817]: parsing config with SHA512: 52ed1dfbc1b679efb2907371d30380e3082773c5d1c0d35cbd09570eb3ef141c72878fc08f39b5ed07bc3294e32362e0c800b984124f1de05a3624fc910c9168
Apr 16 00:15:14.608997 unknown[817]: fetched base config from "system"
Apr 16 00:15:14.609008 unknown[817]: fetched base config from "system"
Apr 16 00:15:14.609525 ignition[817]: fetch: fetch complete
Apr 16 00:15:14.609017 unknown[817]: fetched user config from "hetzner"
Apr 16 00:15:14.609530 ignition[817]: fetch: fetch passed
Apr 16 00:15:14.612701 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 00:15:14.609600 ignition[817]: Ignition finished successfully
Apr 16 00:15:14.614788 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 00:15:14.651103 ignition[824]: Ignition 2.22.0
Apr 16 00:15:14.651829 ignition[824]: Stage: kargs
Apr 16 00:15:14.652367 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:15:14.652899 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:15:14.653725 ignition[824]: kargs: kargs passed
Apr 16 00:15:14.653779 ignition[824]: Ignition finished successfully
Apr 16 00:15:14.657160 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 00:15:14.660611 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 00:15:14.701781 ignition[831]: Ignition 2.22.0
Apr 16 00:15:14.702403 ignition[831]: Stage: disks
Apr 16 00:15:14.702663 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:15:14.702673 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:15:14.703664 ignition[831]: disks: disks passed
Apr 16 00:15:14.703716 ignition[831]: Ignition finished successfully
Apr 16 00:15:14.706959 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 00:15:14.707915 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 00:15:14.708931 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 00:15:14.710155 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 00:15:14.711430 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 00:15:14.712412 systemd[1]: Reached target basic.target - Basic System.
Apr 16 00:15:14.714385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 00:15:14.746136 systemd-fsck[839]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Apr 16 00:15:14.753668 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 00:15:14.756243 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 00:15:14.834240 kernel: EXT4-fs (sda9): mounted filesystem 56eb5b2c-8c84-48e6-9302-66af062fcb94 r/w with ordered data mode. Quota mode: none.
Apr 16 00:15:14.835705 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 00:15:14.836321 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 00:15:14.839223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 00:15:14.841318 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 00:15:14.856262 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 16 00:15:14.861896 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 00:15:14.863310 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 00:15:14.865966 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (847)
Apr 16 00:15:14.865987 kernel: BTRFS info (device sda6): first mount of filesystem 5d984f6c-8904-4549-88de-a3d0a0f4ead8
Apr 16 00:15:14.865997 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 00:15:14.868454 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 00:15:14.871603 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 00:15:14.879696 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 00:15:14.879768 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 00:15:14.880767 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 00:15:14.884944 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 00:15:14.929141 coreos-metadata[849]: Apr 16 00:15:14.929 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 16 00:15:14.930412 initrd-setup-root[874]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 00:15:14.933324 coreos-metadata[849]: Apr 16 00:15:14.932 INFO Fetch successful
Apr 16 00:15:14.933324 coreos-metadata[849]: Apr 16 00:15:14.932 INFO wrote hostname ci-4459-2-4-n-0840528111 to /sysroot/etc/hostname
Apr 16 00:15:14.935199 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 00:15:14.938043 initrd-setup-root[881]: cut: /sysroot/etc/group: No such file or directory
Apr 16 00:15:14.942466 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 00:15:14.947568 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 00:15:15.053048 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 00:15:15.054952 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 00:15:15.057048 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 00:15:15.080216 kernel: BTRFS info (device sda6): last unmount of filesystem 5d984f6c-8904-4549-88de-a3d0a0f4ead8
Apr 16 00:15:15.100929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 00:15:15.111500 ignition[964]: INFO : Ignition 2.22.0
Apr 16 00:15:15.112327 ignition[964]: INFO : Stage: mount
Apr 16 00:15:15.113601 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:15:15.113601 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:15:15.119346 ignition[964]: INFO : mount: mount passed
Apr 16 00:15:15.119346 ignition[964]: INFO : Ignition finished successfully
Apr 16 00:15:15.120661 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 00:15:15.121509 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 00:15:15.125750 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 00:15:15.152299 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 00:15:15.175469 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (975)
Apr 16 00:15:15.175538 kernel: BTRFS info (device sda6): first mount of filesystem 5d984f6c-8904-4549-88de-a3d0a0f4ead8
Apr 16 00:15:15.176838 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 00:15:15.180477 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 00:15:15.180551 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 00:15:15.180564 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 00:15:15.183551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 00:15:15.215800 ignition[992]: INFO : Ignition 2.22.0
Apr 16 00:15:15.216647 ignition[992]: INFO : Stage: files
Apr 16 00:15:15.217261 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:15:15.217967 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:15:15.219929 ignition[992]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 00:15:15.221921 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 00:15:15.221921 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 00:15:15.226512 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 00:15:15.227544 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 00:15:15.229266 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 00:15:15.228578 unknown[992]: wrote ssh authorized keys file for user: core
Apr 16 00:15:15.232049 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 00:15:15.232049 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 16 00:15:15.320513 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 00:15:15.488407 systemd-networkd[812]: eth1: Gained IPv6LL
Apr 16 00:15:15.517875 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:15:15.519372 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 16 00:15:15.862489 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 00:15:16.000511 systemd-networkd[812]: eth0: Gained IPv6LL
Apr 16 00:15:16.365741 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 16 00:15:16.365741 ignition[992]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 00:15:16.368433 ignition[992]: INFO : files: files passed
Apr 16 00:15:16.368433 ignition[992]: INFO : Ignition finished successfully
Apr 16 00:15:16.372296 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 00:15:16.378782 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 00:15:16.384753 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 00:15:16.406170 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 00:15:16.407226 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 00:15:16.416751 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:15:16.416751 initrd-setup-root-after-ignition[1021]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:15:16.419856 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:15:16.423052 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 00:15:16.425386 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 00:15:16.427910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 00:15:16.480633 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 00:15:16.480791 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 00:15:16.482929 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 00:15:16.484282 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 00:15:16.485451 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 00:15:16.486345 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 00:15:16.512277 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 00:15:16.515074 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 00:15:16.547277 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 00:15:16.547987 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 00:15:16.549504 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 00:15:16.550863 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 00:15:16.550982 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 00:15:16.552327 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 00:15:16.553625 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 00:15:16.554648 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 00:15:16.555725 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 00:15:16.557017 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 00:15:16.558796 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 00:15:16.559975 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 00:15:16.560683 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 00:15:16.561898 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 00:15:16.563061 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 00:15:16.564147 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 00:15:16.565089 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 00:15:16.565303 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 00:15:16.566607 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 00:15:16.567249 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 00:15:16.568419 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 00:15:16.568492 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 00:15:16.569633 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 00:15:16.569737 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 00:15:16.571391 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 00:15:16.571492 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 00:15:16.572929 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 00:15:16.573023 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 00:15:16.573883 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 16 00:15:16.573975 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 00:15:16.575731 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 00:15:16.578245 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 00:15:16.578366 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 00:15:16.583911 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 00:15:16.584918 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 00:15:16.585039 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 00:15:16.585917 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 00:15:16.586006 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 00:15:16.593934 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 00:15:16.594705 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 00:15:16.611639 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 00:15:16.615174 ignition[1045]: INFO : Ignition 2.22.0
Apr 16 00:15:16.615174 ignition[1045]: INFO : Stage: umount
Apr 16 00:15:16.616370 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:15:16.616370 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 00:15:16.618754 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 00:15:16.618869 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 00:15:16.622716 ignition[1045]: INFO : umount: umount passed
Apr 16 00:15:16.622716 ignition[1045]: INFO : Ignition finished successfully
Apr 16 00:15:16.626638 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 00:15:16.626774 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 00:15:16.628044 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 00:15:16.628102 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 00:15:16.628823 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 00:15:16.628866 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 00:15:16.629993 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 00:15:16.630031 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 00:15:16.631163 systemd[1]: Stopped target network.target - Network.
Apr 16 00:15:16.632325 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 00:15:16.632379 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 00:15:16.633367 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 00:15:16.634170 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 00:15:16.638305 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 00:15:16.640935 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 00:15:16.641617 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 00:15:16.642795 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 00:15:16.642843 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 00:15:16.644036 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 00:15:16.644072 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 00:15:16.645002 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 00:15:16.645055 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 00:15:16.645956 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 00:15:16.645996 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 00:15:16.646874 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 00:15:16.646913 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 00:15:16.647991 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 00:15:16.649276 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 00:15:16.653634 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 00:15:16.653763 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 00:15:16.658831 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 00:15:16.659449 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 00:15:16.659558 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 00:15:16.664478 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 00:15:16.664743 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 00:15:16.664859 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 00:15:16.667159 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 00:15:16.668042 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 00:15:16.669368 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 00:15:16.669410 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 00:15:16.671577 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 00:15:16.672278 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 00:15:16.672334 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 00:15:16.673367 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 00:15:16.673414 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 00:15:16.676382 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 00:15:16.676433 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 00:15:16.677089 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 00:15:16.678769 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 00:15:16.695964 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 00:15:16.696206 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 00:15:16.697874 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 00:15:16.697938 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 00:15:16.700715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 00:15:16.700768 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 00:15:16.701884 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 00:15:16.701934 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 00:15:16.704908 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 00:15:16.704962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 00:15:16.706840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 00:15:16.706894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 00:15:16.709164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 00:15:16.711824 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 00:15:16.711900 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 00:15:16.715349 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 00:15:16.715408 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 00:15:16.716975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 00:15:16.717028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:15:16.719574 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 00:15:16.720318 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 00:15:16.727525 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 00:15:16.727744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 00:15:16.730721 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 00:15:16.732437 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 00:15:16.758222 systemd[1]: Switching root.
Apr 16 00:15:16.788903 systemd-journald[245]: Journal stopped
Apr 16 00:15:17.716245 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Apr 16 00:15:17.716325 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 00:15:17.716342 kernel: SELinux: policy capability open_perms=1 Apr 16 00:15:17.716351 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 00:15:17.716364 kernel: SELinux: policy capability always_check_network=0 Apr 16 00:15:17.716376 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 00:15:17.716385 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 00:15:17.716397 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 00:15:17.716406 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 00:15:17.716418 kernel: SELinux: policy capability userspace_initial_context=0 Apr 16 00:15:17.716427 kernel: audit: type=1403 audit(1776298516.966:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 00:15:17.716438 systemd[1]: Successfully loaded SELinux policy in 51.071ms. Apr 16 00:15:17.716454 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.153ms. Apr 16 00:15:17.716467 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 00:15:17.716478 systemd[1]: Detected virtualization kvm. Apr 16 00:15:17.716488 systemd[1]: Detected architecture arm64. Apr 16 00:15:17.716538 systemd[1]: Detected first boot. Apr 16 00:15:17.716551 systemd[1]: Hostname set to . Apr 16 00:15:17.716561 systemd[1]: Initializing machine ID from VM UUID. Apr 16 00:15:17.716576 zram_generator::config[1089]: No configuration found. Apr 16 00:15:17.716590 kernel: NET: Registered PF_VSOCK protocol family Apr 16 00:15:17.716603 systemd[1]: Populated /etc with preset unit settings. 
Apr 16 00:15:17.716615 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 16 00:15:17.716624 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 00:15:17.716634 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 00:15:17.716644 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 00:15:17.716656 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 00:15:17.716669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 00:15:17.716679 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 00:15:17.716689 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 00:15:17.716699 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 00:15:17.716709 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 00:15:17.716718 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 00:15:17.716728 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 00:15:17.716742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 00:15:17.716754 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 00:15:17.716764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 00:15:17.716774 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 00:15:17.716784 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 00:15:17.716793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 16 00:15:17.716804 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 16 00:15:17.716816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 00:15:17.716826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 00:15:17.716837 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 00:15:17.716846 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 00:15:17.716856 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 00:15:17.716866 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 00:15:17.716876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 00:15:17.716886 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 00:15:17.716897 systemd[1]: Reached target slices.target - Slice Units. Apr 16 00:15:17.716908 systemd[1]: Reached target swap.target - Swaps. Apr 16 00:15:17.716918 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 00:15:17.716928 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 00:15:17.716938 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 16 00:15:17.716948 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 00:15:17.716958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 00:15:17.716967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 00:15:17.716977 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 00:15:17.716986 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 00:15:17.716996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Apr 16 00:15:17.717008 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 00:15:17.717017 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 00:15:17.717028 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 00:15:17.717038 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 00:15:17.717048 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 00:15:17.717058 systemd[1]: Reached target machines.target - Containers. Apr 16 00:15:17.717067 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 00:15:17.717078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 00:15:17.717090 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 00:15:17.717101 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 00:15:17.717112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 00:15:17.717122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 00:15:17.717132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 00:15:17.717142 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 00:15:17.717151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 00:15:17.717161 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 00:15:17.717173 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 00:15:17.717545 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Apr 16 00:15:17.717563 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 00:15:17.717577 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 00:15:17.717588 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 00:15:17.717599 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 00:15:17.717610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 00:15:17.717621 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 00:15:17.717633 kernel: loop: module loaded Apr 16 00:15:17.717643 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 00:15:17.717653 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 16 00:15:17.717663 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 00:15:17.717673 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 00:15:17.717684 systemd[1]: Stopped verity-setup.service. Apr 16 00:15:17.717694 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 00:15:17.717705 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 00:15:17.717726 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 00:15:17.717737 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 00:15:17.718282 systemd-journald[1157]: Collecting audit messages is disabled. Apr 16 00:15:17.718325 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 00:15:17.718336 kernel: fuse: init (API version 7.41) Apr 16 00:15:17.718346 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Apr 16 00:15:17.718358 systemd-journald[1157]: Journal started Apr 16 00:15:17.718380 systemd-journald[1157]: Runtime Journal (/run/log/journal/0a1acdabf14f41d899e97bc4fef4e07f) is 8M, max 76.5M, 68.5M free. Apr 16 00:15:17.476837 systemd[1]: Queued start job for default target multi-user.target. Apr 16 00:15:17.491719 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 16 00:15:17.492293 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 00:15:17.727214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 00:15:17.728373 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 00:15:17.729223 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 00:15:17.733213 kernel: ACPI: bus type drm_connector registered Apr 16 00:15:17.735473 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 00:15:17.737774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 00:15:17.737947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 00:15:17.739587 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 00:15:17.739946 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 00:15:17.741578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 00:15:17.742434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 00:15:17.743944 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 00:15:17.744174 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 00:15:17.749290 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 00:15:17.749469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 00:15:17.751256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 16 00:15:17.752147 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 00:15:17.754860 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 00:15:17.766112 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 16 00:15:17.773438 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 00:15:17.776105 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 00:15:17.782324 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 00:15:17.783429 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 00:15:17.783579 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 00:15:17.786049 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 16 00:15:17.793417 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 00:15:17.794207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 00:15:17.799641 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 00:15:17.805348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 00:15:17.806006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 00:15:17.809406 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 00:15:17.810069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 16 00:15:17.812584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 00:15:17.815446 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 00:15:17.819113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 00:15:17.822737 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 00:15:17.823428 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 00:15:17.839743 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 00:15:17.850707 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 00:15:17.854678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 00:15:17.859342 systemd-journald[1157]: Time spent on flushing to /var/log/journal/0a1acdabf14f41d899e97bc4fef4e07f is 49.739ms for 1171 entries. Apr 16 00:15:17.859342 systemd-journald[1157]: System Journal (/var/log/journal/0a1acdabf14f41d899e97bc4fef4e07f) is 8M, max 584.8M, 576.8M free. Apr 16 00:15:17.914624 systemd-journald[1157]: Received client request to flush runtime journal. Apr 16 00:15:17.914668 kernel: loop0: detected capacity change from 0 to 200864 Apr 16 00:15:17.876447 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 16 00:15:17.901237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 00:15:17.918617 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 00:15:17.939667 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 00:15:17.941256 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 16 00:15:17.961268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 16 00:15:17.965986 kernel: loop1: detected capacity change from 0 to 8 Apr 16 00:15:17.966578 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 00:15:17.971604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 00:15:17.989679 kernel: loop2: detected capacity change from 0 to 100632 Apr 16 00:15:18.028212 kernel: loop3: detected capacity change from 0 to 119840 Apr 16 00:15:18.029974 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Apr 16 00:15:18.030353 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Apr 16 00:15:18.037080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 00:15:18.062767 kernel: loop4: detected capacity change from 0 to 200864 Apr 16 00:15:18.086313 kernel: loop5: detected capacity change from 0 to 8 Apr 16 00:15:18.090214 kernel: loop6: detected capacity change from 0 to 100632 Apr 16 00:15:18.113474 kernel: loop7: detected capacity change from 0 to 119840 Apr 16 00:15:18.135569 (sd-merge)[1234]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 16 00:15:18.136982 (sd-merge)[1234]: Merged extensions into '/usr'. Apr 16 00:15:18.143286 systemd[1]: Reload requested from client PID 1207 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 00:15:18.143312 systemd[1]: Reloading... Apr 16 00:15:18.283207 zram_generator::config[1260]: No configuration found. Apr 16 00:15:18.307219 ldconfig[1202]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 00:15:18.472301 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 00:15:18.472474 systemd[1]: Reloading finished in 328 ms. Apr 16 00:15:18.487638 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 00:15:18.492048 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Apr 16 00:15:18.504435 systemd[1]: Starting ensure-sysext.service... Apr 16 00:15:18.508441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 00:15:18.539311 systemd[1]: Reload requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)... Apr 16 00:15:18.539328 systemd[1]: Reloading... Apr 16 00:15:18.540381 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 16 00:15:18.540800 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 00:15:18.541467 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 00:15:18.541742 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 00:15:18.542673 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 00:15:18.542975 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Apr 16 00:15:18.543054 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Apr 16 00:15:18.547168 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 00:15:18.547199 systemd-tmpfiles[1298]: Skipping /boot Apr 16 00:15:18.555014 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 00:15:18.555032 systemd-tmpfiles[1298]: Skipping /boot Apr 16 00:15:18.605219 zram_generator::config[1328]: No configuration found. Apr 16 00:15:18.762284 systemd[1]: Reloading finished in 222 ms. Apr 16 00:15:18.776236 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 00:15:18.781930 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 16 00:15:18.792452 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 00:15:18.797431 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 00:15:18.801572 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 00:15:18.805156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 00:15:18.811869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 00:15:18.819528 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 00:15:18.826665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 00:15:18.832583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 00:15:18.837269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 00:15:18.840644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 00:15:18.841703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 00:15:18.841828 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 00:15:18.849756 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 00:15:18.854107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 00:15:18.854288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 16 00:15:18.854367 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 00:15:18.856758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 00:15:18.859907 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 00:15:18.861410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 00:15:18.861560 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 00:15:18.864574 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 00:15:18.875633 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 00:15:18.878383 systemd[1]: Finished ensure-sysext.service. Apr 16 00:15:18.894615 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 00:15:18.895673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 00:15:18.896814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 00:15:18.911326 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 00:15:18.913700 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 00:15:18.915292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 00:15:18.918584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 00:15:18.920433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 16 00:15:18.922537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 00:15:18.923373 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 00:15:18.926805 systemd-udevd[1368]: Using default interface naming scheme 'v255'. Apr 16 00:15:18.946073 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 00:15:18.947467 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 00:15:18.947667 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 00:15:18.957410 augenrules[1404]: No rules Apr 16 00:15:18.958352 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 00:15:18.960237 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 00:15:18.970827 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 00:15:18.973083 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 00:15:18.974043 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 00:15:18.978654 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 00:15:19.007581 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 00:15:19.128016 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 16 00:15:19.263425 systemd-networkd[1414]: lo: Link UP Apr 16 00:15:19.264211 systemd-networkd[1414]: lo: Gained carrier Apr 16 00:15:19.265994 systemd-networkd[1414]: Enumeration completed Apr 16 00:15:19.266312 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 16 00:15:19.266832 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:15:19.266951 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 00:15:19.267821 systemd-networkd[1414]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:15:19.267903 systemd-networkd[1414]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 00:15:19.268347 systemd-networkd[1414]: eth0: Link UP Apr 16 00:15:19.268917 systemd-networkd[1414]: eth0: Gained carrier Apr 16 00:15:19.269008 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:15:19.269740 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 16 00:15:19.275401 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 00:15:19.292252 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 00:15:19.315461 systemd-networkd[1414]: eth1: Link UP Apr 16 00:15:19.316368 systemd-networkd[1414]: eth1: Gained carrier Apr 16 00:15:19.316547 systemd-networkd[1414]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:15:19.336082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 00:15:19.337264 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 00:15:19.349999 systemd-resolved[1367]: Positive Trust Anchors: Apr 16 00:15:19.350018 systemd-resolved[1367]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 00:15:19.350049 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 00:15:19.351863 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 00:15:19.356982 systemd-resolved[1367]: Using system hostname 'ci-4459-2-4-n-0840528111'. Apr 16 00:15:19.358945 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 00:15:19.359720 systemd[1]: Reached target network.target - Network. Apr 16 00:15:19.360191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 00:15:19.360795 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 00:15:19.362805 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 00:15:19.364353 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 00:15:19.366507 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 00:15:19.367158 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 00:15:19.368885 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 16 00:15:19.370264 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 00:15:19.370298 systemd[1]: Reached target paths.target - Path Units. Apr 16 00:15:19.371247 systemd[1]: Reached target timers.target - Timer Units. Apr 16 00:15:19.374322 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 00:15:19.377574 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 00:15:19.382113 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 00:15:19.384517 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 00:15:19.387714 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 00:15:19.389340 systemd-networkd[1414]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 16 00:15:19.395167 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 00:15:19.397504 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 00:15:19.399683 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 00:15:19.403108 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 00:15:19.403377 systemd-networkd[1414]: eth0: DHCPv4 address 88.198.131.37/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 16 00:15:19.405273 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Apr 16 00:15:19.406282 systemd[1]: Reached target basic.target - Basic System. Apr 16 00:15:19.407856 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 00:15:19.407882 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Apr 16 00:15:19.410263 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 00:15:19.413923 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 16 00:15:19.420283 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 00:15:19.428711 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 00:15:19.430746 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 00:15:19.435586 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 00:15:19.436152 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 00:15:19.438135 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 00:15:19.442655 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 00:15:19.449082 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 00:15:19.451332 systemd-timesyncd[1384]: Contacted time server 116.203.96.227:123 (0.flatcar.pool.ntp.org). Apr 16 00:15:19.451401 systemd-timesyncd[1384]: Initial clock synchronization to Thu 2026-04-16 00:15:19.466694 UTC. Apr 16 00:15:19.469067 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 00:15:19.473449 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 00:15:19.475008 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 00:15:19.475809 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 00:15:19.482206 jq[1478]: false Apr 16 00:15:19.481631 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 16 00:15:19.486129 coreos-metadata[1475]: Apr 16 00:15:19.485 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 16 00:15:19.487410 coreos-metadata[1475]: Apr 16 00:15:19.487 INFO Fetch successful
Apr 16 00:15:19.489947 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 00:15:19.496275 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 00:15:19.498461 coreos-metadata[1475]: Apr 16 00:15:19.497 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 16 00:15:19.498461 coreos-metadata[1475]: Apr 16 00:15:19.497 INFO Fetch successful
Apr 16 00:15:19.497341 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 00:15:19.503126 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 00:15:19.512818 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 00:15:19.513026 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 00:15:19.519042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 00:15:19.525160 jq[1488]: true
Apr 16 00:15:19.539998 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 16 00:15:19.548465 extend-filesystems[1479]: Found /dev/sda6
Apr 16 00:15:19.559631 update_engine[1487]: I20260416 00:15:19.556726 1487 main.cc:92] Flatcar Update Engine starting
Apr 16 00:15:19.559921 extend-filesystems[1479]: Found /dev/sda9
Apr 16 00:15:19.567697 extend-filesystems[1479]: Checking size of /dev/sda9
Apr 16 00:15:19.572813 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 16 00:15:19.576575 extend-filesystems[1479]: Resized partition /dev/sda9
Apr 16 00:15:19.582942 extend-filesystems[1523]: resize2fs 1.47.3 (8-Jul-2025)
Apr 16 00:15:19.597767 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 16 00:15:19.577306 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 00:15:19.592454 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 00:15:19.594562 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 00:15:19.624734 tar[1497]: linux-arm64/LICENSE
Apr 16 00:15:19.643791 jq[1508]: true
Apr 16 00:15:19.641784 dbus-daemon[1476]: [system] SELinux support is enabled
Apr 16 00:15:19.650511 tar[1497]: linux-arm64/helm
Apr 16 00:15:19.646703 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 00:15:19.650720 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 00:15:19.662292 update_engine[1487]: I20260416 00:15:19.661523 1487 update_check_scheduler.cc:74] Next update check in 4m27s
Apr 16 00:15:19.656738 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 00:15:19.660732 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 00:15:19.660769 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 00:15:19.663556 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 00:15:19.663684 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 00:15:19.702669 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 00:15:19.732213 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 16 00:15:19.746331 extend-filesystems[1523]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 16 00:15:19.746331 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 16 00:15:19.746331 extend-filesystems[1523]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 16 00:15:19.750257 extend-filesystems[1479]: Resized filesystem in /dev/sda9
Apr 16 00:15:19.787855 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 00:15:19.788979 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 00:15:19.790249 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 00:15:19.824773 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 16 00:15:19.824869 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 16 00:15:19.824881 kernel: [drm] features: -context_init
Apr 16 00:15:19.826237 kernel: [drm] number of scanouts: 1
Apr 16 00:15:19.826312 kernel: [drm] number of cap sets: 0
Apr 16 00:15:19.828195 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Apr 16 00:15:19.833617 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 00:15:19.837215 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 16 00:15:19.846013 systemd-logind[1486]: New seat seat0.
Apr 16 00:15:19.849591 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 00:15:19.863760 bash[1571]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 00:15:19.866168 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 00:15:19.872792 systemd[1]: Starting sshkeys.service...
Apr 16 00:15:19.880975 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 16 00:15:19.882137 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 00:15:19.930957 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 16 00:15:19.934637 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 16 00:15:19.978753 coreos-metadata[1575]: Apr 16 00:15:19.978 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 16 00:15:19.983210 coreos-metadata[1575]: Apr 16 00:15:19.981 INFO Fetch successful
Apr 16 00:15:19.984505 unknown[1575]: wrote ssh authorized keys file for user: core
Apr 16 00:15:20.034030 containerd[1514]: time="2026-04-16T00:15:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 16 00:15:20.036199 containerd[1514]: time="2026-04-16T00:15:20.035942045Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 16 00:15:20.054131 containerd[1514]: time="2026-04-16T00:15:20.054077576Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.569µs"
Apr 16 00:15:20.054534 containerd[1514]: time="2026-04-16T00:15:20.054473117Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 16 00:15:20.054581 containerd[1514]: time="2026-04-16T00:15:20.054538700Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 16 00:15:20.055099 containerd[1514]: time="2026-04-16T00:15:20.054717553Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 16 00:15:20.055099 containerd[1514]: time="2026-04-16T00:15:20.054750945Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 16 00:15:20.055099 containerd[1514]: time="2026-04-16T00:15:20.054782215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 00:15:20.055099 containerd[1514]: time="2026-04-16T00:15:20.054846677Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 00:15:20.055099 containerd[1514]: time="2026-04-16T00:15:20.054860291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 00:15:20.059072 containerd[1514]: time="2026-04-16T00:15:20.059013216Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 00:15:20.059072 containerd[1514]: time="2026-04-16T00:15:20.059056257Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.063310160Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.063360889Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.063492936Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.063759754Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.063811964Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.063823175Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.063851202Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.064064648Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 16 00:15:20.066971 containerd[1514]: time="2026-04-16T00:15:20.064123264Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 00:15:20.070524 update-ssh-keys[1580]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 00:15:20.073250 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 16 00:15:20.076492 containerd[1514]: time="2026-04-16T00:15:20.076378644Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 16 00:15:20.076492 containerd[1514]: time="2026-04-16T00:15:20.076459082Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 16 00:15:20.076492 containerd[1514]: time="2026-04-16T00:15:20.076480222Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 16 00:15:20.076492 containerd[1514]: time="2026-04-16T00:15:20.076492714Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076506487Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076518379Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076530911Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076554894Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076571510Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076587606Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076597415Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 16 00:15:20.076646 containerd[1514]: time="2026-04-16T00:15:20.076611949Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 16 00:15:20.076774 containerd[1514]: time="2026-04-16T00:15:20.076759812Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 16 00:15:20.076791 containerd[1514]: time="2026-04-16T00:15:20.076779070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 16 00:15:20.076811 containerd[1514]: time="2026-04-16T00:15:20.076793404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 16 00:15:20.076811 containerd[1514]: time="2026-04-16T00:15:20.076805616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 16 00:15:20.076843 containerd[1514]: time="2026-04-16T00:15:20.076816827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 16 00:15:20.076843 containerd[1514]: time="2026-04-16T00:15:20.076827637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 00:15:20.076877 containerd[1514]: time="2026-04-16T00:15:20.076843532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 16 00:15:20.076877 containerd[1514]: time="2026-04-16T00:15:20.076854183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 16 00:15:20.076877 containerd[1514]: time="2026-04-16T00:15:20.076866234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 16 00:15:20.076929 containerd[1514]: time="2026-04-16T00:15:20.076877485Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 16 00:15:20.076929 containerd[1514]: time="2026-04-16T00:15:20.076890738Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 16 00:15:20.076996 systemd[1]: Finished sshkeys.service.
Apr 16 00:15:20.078311 containerd[1514]: time="2026-04-16T00:15:20.077061663Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 16 00:15:20.078311 containerd[1514]: time="2026-04-16T00:15:20.077078239Z" level=info msg="Start snapshots syncer"
Apr 16 00:15:20.078311 containerd[1514]: time="2026-04-16T00:15:20.077100180Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 16 00:15:20.082196 containerd[1514]: time="2026-04-16T00:15:20.081428354Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 16 00:15:20.082196 containerd[1514]: time="2026-04-16T00:15:20.081506109Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081589390Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081716152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081741817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081752667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081764078Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081776610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081787140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081797951Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081835747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081849360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081860331Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081892602Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081912101Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 00:15:20.082388 containerd[1514]: time="2026-04-16T00:15:20.081920469Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.081929317Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.081937966Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.081947295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.081957264Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.082033138Z" level=info msg="runtime interface created"
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.082038262Z" level=info msg="created NRI interface"
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.082046510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.082059643Z" level=info msg="Connect containerd service"
Apr 16 00:15:20.082639 containerd[1514]: time="2026-04-16T00:15:20.082085508Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 00:15:20.086192 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 00:15:20.093064 containerd[1514]: time="2026-04-16T00:15:20.093012166Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 00:15:20.125315 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 16 00:15:20.130528 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 16 00:15:20.150574 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 00:15:20.153249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:15:20.162446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383464254Z" level=info msg="Start subscribing containerd event"
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383577404Z" level=info msg="Start recovering state"
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383673696Z" level=info msg="Start event monitor"
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383688951Z" level=info msg="Start cni network conf syncer for default"
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383720221Z" level=info msg="Start streaming server"
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383753814Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383762062Z" level=info msg="runtime interface starting up..."
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383768188Z" level=info msg="starting plugins..."
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383783042Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383473623Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 00:15:20.384029 containerd[1514]: time="2026-04-16T00:15:20.383986198Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 00:15:20.385662 containerd[1514]: time="2026-04-16T00:15:20.384862163Z" level=info msg="containerd successfully booted in 0.351277s"
Apr 16 00:15:20.384972 systemd[1]: Started containerd.service - containerd container runtime.
Apr 16 00:15:20.387065 sshd_keygen[1509]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 00:15:20.433201 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 00:15:20.439510 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 00:15:20.448885 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 00:15:20.457778 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 00:15:20.458219 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 00:15:20.463631 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 00:15:20.484073 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 00:15:20.489377 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 00:15:20.493723 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 16 00:15:20.496437 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 00:15:20.499266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:15:20.532924 tar[1497]: linux-arm64/README.md
Apr 16 00:15:20.553425 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 00:15:20.672478 systemd-networkd[1414]: eth1: Gained IPv6LL
Apr 16 00:15:20.676215 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 00:15:20.677944 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 00:15:20.681173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:15:20.684435 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 00:15:20.715661 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 16 00:15:21.056627 systemd-networkd[1414]: eth0: Gained IPv6LL
Apr 16 00:15:21.449431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:15:21.452615 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 16 00:15:21.454384 systemd[1]: Startup finished in 2.370s (kernel) + 5.360s (initrd) + 4.539s (userspace) = 12.270s.
Apr 16 00:15:21.460339 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:15:21.925006 kubelet[1654]: E0416 00:15:21.924878 1654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:15:21.929107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:15:21.929387 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:15:21.931401 systemd[1]: kubelet.service: Consumed 812ms CPU time, 247.8M memory peak.
Apr 16 00:15:24.780140 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 00:15:24.782482 systemd[1]: Started sshd@0-88.198.131.37:22-4.175.71.9:53524.service - OpenSSH per-connection server daemon (4.175.71.9:53524).
Apr 16 00:15:24.931666 sshd[1666]: Accepted publickey for core from 4.175.71.9 port 53524 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI
Apr 16 00:15:24.934990 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:15:24.948500 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 16 00:15:24.950064 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 16 00:15:24.953214 systemd-logind[1486]: New session 1 of user core.
Apr 16 00:15:24.979447 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 16 00:15:24.983081 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 16 00:15:25.008062 (systemd)[1671]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 00:15:25.012558 systemd-logind[1486]: New session c1 of user core.
Apr 16 00:15:25.158561 systemd[1671]: Queued start job for default target default.target.
Apr 16 00:15:25.170702 systemd[1671]: Created slice app.slice - User Application Slice.
Apr 16 00:15:25.170757 systemd[1671]: Reached target paths.target - Paths.
Apr 16 00:15:25.170833 systemd[1671]: Reached target timers.target - Timers.
Apr 16 00:15:25.172714 systemd[1671]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 00:15:25.199924 systemd[1671]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 00:15:25.200123 systemd[1671]: Reached target sockets.target - Sockets.
Apr 16 00:15:25.200346 systemd[1671]: Reached target basic.target - Basic System.
Apr 16 00:15:25.200484 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 00:15:25.202315 systemd[1671]: Reached target default.target - Main User Target.
Apr 16 00:15:25.202388 systemd[1671]: Startup finished in 180ms.
Apr 16 00:15:25.208607 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 00:15:25.277344 systemd[1]: Started sshd@1-88.198.131.37:22-4.175.71.9:52562.service - OpenSSH per-connection server daemon (4.175.71.9:52562).
Apr 16 00:15:25.414694 sshd[1682]: Accepted publickey for core from 4.175.71.9 port 52562 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI
Apr 16 00:15:25.417132 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:15:25.423393 systemd-logind[1486]: New session 2 of user core.
Apr 16 00:15:25.432527 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 00:15:25.478933 sshd[1685]: Connection closed by 4.175.71.9 port 52562
Apr 16 00:15:25.479826 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
Apr 16 00:15:25.484904 systemd[1]: sshd@1-88.198.131.37:22-4.175.71.9:52562.service: Deactivated successfully.
Apr 16 00:15:25.487289 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 00:15:25.489215 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit.
Apr 16 00:15:25.490619 systemd-logind[1486]: Removed session 2.
Apr 16 00:15:25.514742 systemd[1]: Started sshd@2-88.198.131.37:22-4.175.71.9:52568.service - OpenSSH per-connection server daemon (4.175.71.9:52568).
Apr 16 00:15:25.654253 sshd[1691]: Accepted publickey for core from 4.175.71.9 port 52568 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI
Apr 16 00:15:25.655902 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:15:25.661969 systemd-logind[1486]: New session 3 of user core.
Apr 16 00:15:25.666511 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 00:15:25.706930 sshd[1694]: Connection closed by 4.175.71.9 port 52568
Apr 16 00:15:25.707879 sshd-session[1691]: pam_unix(sshd:session): session closed for user core
Apr 16 00:15:25.713851 systemd[1]: sshd@2-88.198.131.37:22-4.175.71.9:52568.service: Deactivated successfully.
Apr 16 00:15:25.715895 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 00:15:25.717849 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit.
Apr 16 00:15:25.719040 systemd-logind[1486]: Removed session 3.
Apr 16 00:15:25.730407 systemd[1]: Started sshd@3-88.198.131.37:22-4.175.71.9:52584.service - OpenSSH per-connection server daemon (4.175.71.9:52584).
Apr 16 00:15:25.874437 sshd[1700]: Accepted publickey for core from 4.175.71.9 port 52584 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI
Apr 16 00:15:25.877047 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:15:25.882706 systemd-logind[1486]: New session 4 of user core.
Apr 16 00:15:25.891606 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 00:15:25.938439 sshd[1703]: Connection closed by 4.175.71.9 port 52584
Apr 16 00:15:25.939441 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Apr 16 00:15:25.946085 systemd[1]: sshd@3-88.198.131.37:22-4.175.71.9:52584.service: Deactivated successfully.
Apr 16 00:15:25.948821 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 00:15:25.950559 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
Apr 16 00:15:25.952466 systemd-logind[1486]: Removed session 4.
Apr 16 00:15:25.965465 systemd[1]: Started sshd@4-88.198.131.37:22-4.175.71.9:52590.service - OpenSSH per-connection server daemon (4.175.71.9:52590).
Apr 16 00:15:26.102904 sshd[1710]: Accepted publickey for core from 4.175.71.9 port 52590 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI
Apr 16 00:15:26.105208 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:15:26.113190 systemd-logind[1486]: New session 5 of user core.
Apr 16 00:15:26.120588 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 00:15:26.159823 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 00:15:26.160117 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 00:15:26.177128 sudo[1714]: pam_unix(sudo:session): session closed for user root Apr 16 00:15:26.194687 sshd[1713]: Connection closed by 4.175.71.9 port 52590 Apr 16 00:15:26.196103 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Apr 16 00:15:26.202605 systemd[1]: sshd@4-88.198.131.37:22-4.175.71.9:52590.service: Deactivated successfully. Apr 16 00:15:26.206105 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 00:15:26.207313 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Apr 16 00:15:26.209030 systemd-logind[1486]: Removed session 5. Apr 16 00:15:26.226258 systemd[1]: Started sshd@5-88.198.131.37:22-4.175.71.9:52604.service - OpenSSH per-connection server daemon (4.175.71.9:52604). Apr 16 00:15:26.367246 sshd[1720]: Accepted publickey for core from 4.175.71.9 port 52604 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:15:26.368878 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:15:26.375162 systemd-logind[1486]: New session 6 of user core. Apr 16 00:15:26.383563 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 00:15:26.412927 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 00:15:26.413636 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 00:15:26.418806 sudo[1725]: pam_unix(sudo:session): session closed for user root Apr 16 00:15:26.425868 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 00:15:26.426206 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 00:15:26.440512 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 00:15:26.494200 augenrules[1747]: No rules Apr 16 00:15:26.496247 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 00:15:26.498256 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 00:15:26.500267 sudo[1724]: pam_unix(sudo:session): session closed for user root Apr 16 00:15:26.517562 sshd[1723]: Connection closed by 4.175.71.9 port 52604 Apr 16 00:15:26.518557 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Apr 16 00:15:26.523644 systemd[1]: sshd@5-88.198.131.37:22-4.175.71.9:52604.service: Deactivated successfully. Apr 16 00:15:26.526042 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 00:15:26.527334 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Apr 16 00:15:26.530020 systemd-logind[1486]: Removed session 6. Apr 16 00:15:26.544433 systemd[1]: Started sshd@6-88.198.131.37:22-4.175.71.9:52606.service - OpenSSH per-connection server daemon (4.175.71.9:52606). 
Apr 16 00:15:26.682840 sshd[1756]: Accepted publickey for core from 4.175.71.9 port 52606 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:15:26.684914 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:15:26.691895 systemd-logind[1486]: New session 7 of user core. Apr 16 00:15:26.699561 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 00:15:26.732628 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 00:15:26.733003 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 00:15:27.058953 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 00:15:27.072827 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 00:15:27.300234 dockerd[1777]: time="2026-04-16T00:15:27.300113368Z" level=info msg="Starting up" Apr 16 00:15:27.301232 dockerd[1777]: time="2026-04-16T00:15:27.301162052Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 00:15:27.317265 dockerd[1777]: time="2026-04-16T00:15:27.317108004Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 00:15:27.340760 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1775771895-merged.mount: Deactivated successfully. Apr 16 00:15:27.365989 dockerd[1777]: time="2026-04-16T00:15:27.365920619Z" level=info msg="Loading containers: start." Apr 16 00:15:27.375203 kernel: Initializing XFRM netlink socket Apr 16 00:15:27.627902 systemd-networkd[1414]: docker0: Link UP Apr 16 00:15:27.635275 dockerd[1777]: time="2026-04-16T00:15:27.635172565Z" level=info msg="Loading containers: done." 
Apr 16 00:15:27.658291 dockerd[1777]: time="2026-04-16T00:15:27.658229564Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 00:15:27.658583 dockerd[1777]: time="2026-04-16T00:15:27.658350518Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 00:15:27.658583 dockerd[1777]: time="2026-04-16T00:15:27.658514459Z" level=info msg="Initializing buildkit" Apr 16 00:15:27.685057 dockerd[1777]: time="2026-04-16T00:15:27.685003125Z" level=info msg="Completed buildkit initialization" Apr 16 00:15:27.695797 dockerd[1777]: time="2026-04-16T00:15:27.695712902Z" level=info msg="Daemon has completed initialization" Apr 16 00:15:27.695797 dockerd[1777]: time="2026-04-16T00:15:27.695846504Z" level=info msg="API listen on /run/docker.sock" Apr 16 00:15:27.698000 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 00:15:28.166150 containerd[1514]: time="2026-04-16T00:15:28.166053210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 16 00:15:28.708498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371344686.mount: Deactivated successfully. 
Apr 16 00:15:29.693025 containerd[1514]: time="2026-04-16T00:15:29.692938451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:29.694618 containerd[1514]: time="2026-04-16T00:15:29.694544317Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=24193866" Apr 16 00:15:29.697070 containerd[1514]: time="2026-04-16T00:15:29.696685113Z" level=info msg="ImageCreate event name:\"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:29.699653 containerd[1514]: time="2026-04-16T00:15:29.698416648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:29.699653 containerd[1514]: time="2026-04-16T00:15:29.699455528Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"24190367\" in 1.533355091s" Apr 16 00:15:29.699653 containerd[1514]: time="2026-04-16T00:15:29.699489507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\"" Apr 16 00:15:29.700167 containerd[1514]: time="2026-04-16T00:15:29.700133494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 00:15:30.760086 containerd[1514]: time="2026-04-16T00:15:30.760003069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:30.761692 containerd[1514]: time="2026-04-16T00:15:30.761631933Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=18901464" Apr 16 00:15:30.762934 containerd[1514]: time="2026-04-16T00:15:30.762890330Z" level=info msg="ImageCreate event name:\"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:30.767665 containerd[1514]: time="2026-04-16T00:15:30.766605290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:30.768254 containerd[1514]: time="2026-04-16T00:15:30.768219067Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"20408083\" in 1.067967789s" Apr 16 00:15:30.768254 containerd[1514]: time="2026-04-16T00:15:30.768251883Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\"" Apr 16 00:15:30.769504 containerd[1514]: time="2026-04-16T00:15:30.769478424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 16 00:15:31.652046 containerd[1514]: time="2026-04-16T00:15:31.651947026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:31.657356 containerd[1514]: time="2026-04-16T00:15:31.657290401Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=14047965" Apr 16 00:15:31.658669 containerd[1514]: time="2026-04-16T00:15:31.658598222Z" level=info msg="ImageCreate event name:\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:31.669738 containerd[1514]: time="2026-04-16T00:15:31.667882306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:31.669862 containerd[1514]: time="2026-04-16T00:15:31.669768841Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"15554602\" in 900.258682ms" Apr 16 00:15:31.669862 containerd[1514]: time="2026-04-16T00:15:31.669811702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\"" Apr 16 00:15:31.670542 containerd[1514]: time="2026-04-16T00:15:31.670511754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 16 00:15:32.179632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 00:15:32.181972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:15:32.338025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 00:15:32.349506 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 00:15:32.401611 kubelet[2065]: E0416 00:15:32.401569 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 00:15:32.405024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 00:15:32.405157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 00:15:32.408290 systemd[1]: kubelet.service: Consumed 163ms CPU time, 106.5M memory peak. Apr 16 00:15:32.615585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224057988.mount: Deactivated successfully. Apr 16 00:15:32.836568 containerd[1514]: time="2026-04-16T00:15:32.836514760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:32.837818 containerd[1514]: time="2026-04-16T00:15:32.837700527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=22606312" Apr 16 00:15:32.838726 containerd[1514]: time="2026-04-16T00:15:32.838684205Z" level=info msg="ImageCreate event name:\"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:32.841061 containerd[1514]: time="2026-04-16T00:15:32.840967860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:32.841629 containerd[1514]: time="2026-04-16T00:15:32.841601542Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"22605305\" in 1.170959967s" Apr 16 00:15:32.841718 containerd[1514]: time="2026-04-16T00:15:32.841702947Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\"" Apr 16 00:15:32.842432 containerd[1514]: time="2026-04-16T00:15:32.842400858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 16 00:15:33.350151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612535450.mount: Deactivated successfully. Apr 16 00:15:34.212331 containerd[1514]: time="2026-04-16T00:15:34.212266339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:34.213943 containerd[1514]: time="2026-04-16T00:15:34.213902459Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498" Apr 16 00:15:34.214832 containerd[1514]: time="2026-04-16T00:15:34.214395332Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:34.218296 containerd[1514]: time="2026-04-16T00:15:34.218252720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:34.220293 containerd[1514]: time="2026-04-16T00:15:34.220241257Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.377788977s" Apr 16 00:15:34.220486 containerd[1514]: time="2026-04-16T00:15:34.220452340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Apr 16 00:15:34.221357 containerd[1514]: time="2026-04-16T00:15:34.221335885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 00:15:34.674351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221541532.mount: Deactivated successfully. Apr 16 00:15:34.681203 containerd[1514]: time="2026-04-16T00:15:34.681132806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:34.682088 containerd[1514]: time="2026-04-16T00:15:34.682059168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Apr 16 00:15:34.685203 containerd[1514]: time="2026-04-16T00:15:34.683903689Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:34.686929 containerd[1514]: time="2026-04-16T00:15:34.686884695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:34.688165 containerd[1514]: time="2026-04-16T00:15:34.687713059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", 
repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 466.243281ms" Apr 16 00:15:34.688509 containerd[1514]: time="2026-04-16T00:15:34.688471835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 16 00:15:34.689073 containerd[1514]: time="2026-04-16T00:15:34.689020810Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 16 00:15:35.161056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666205860.mount: Deactivated successfully. Apr 16 00:15:35.866871 containerd[1514]: time="2026-04-16T00:15:35.866822096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:35.868386 containerd[1514]: time="2026-04-16T00:15:35.868350937Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139756" Apr 16 00:15:35.869513 containerd[1514]: time="2026-04-16T00:15:35.869485713Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:35.875003 containerd[1514]: time="2026-04-16T00:15:35.874942153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:15:35.876595 containerd[1514]: time="2026-04-16T00:15:35.876558865Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.187394759s" Apr 16 
00:15:35.876910 containerd[1514]: time="2026-04-16T00:15:35.876731048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Apr 16 00:15:41.251511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:15:41.252100 systemd[1]: kubelet.service: Consumed 163ms CPU time, 106.5M memory peak. Apr 16 00:15:41.254906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:15:41.288964 systemd[1]: Reload requested from client PID 2221 ('systemctl') (unit session-7.scope)... Apr 16 00:15:41.288985 systemd[1]: Reloading... Apr 16 00:15:41.409207 zram_generator::config[2265]: No configuration found. Apr 16 00:15:41.606651 systemd[1]: Reloading finished in 317 ms. Apr 16 00:15:41.667787 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 00:15:41.667944 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 00:15:41.668463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:15:41.668541 systemd[1]: kubelet.service: Consumed 107ms CPU time, 94.9M memory peak. Apr 16 00:15:41.671101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:15:41.824685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:15:41.838629 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 00:15:41.890703 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 00:15:41.891386 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 00:15:41.892338 kubelet[2313]: I0416 00:15:41.892291 2313 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 00:15:43.345086 kubelet[2313]: I0416 00:15:43.345038 2313 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 00:15:43.345086 kubelet[2313]: I0416 00:15:43.345079 2313 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 00:15:43.345687 kubelet[2313]: I0416 00:15:43.345116 2313 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 00:15:43.345687 kubelet[2313]: I0416 00:15:43.345125 2313 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 00:15:43.345687 kubelet[2313]: I0416 00:15:43.345493 2313 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 00:15:43.358233 kubelet[2313]: E0416 00:15:43.357685 2313 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://88.198.131.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 00:15:43.359296 kubelet[2313]: I0416 00:15:43.359266 2313 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 00:15:43.365290 kubelet[2313]: I0416 00:15:43.365171 2313 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 00:15:43.368209 kubelet[2313]: I0416 00:15:43.368153 2313 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 00:15:43.368612 kubelet[2313]: I0416 00:15:43.368463 2313 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 00:15:43.368692 kubelet[2313]: I0416 00:15:43.368497 2313 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-0840528111","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 00:15:43.368692 kubelet[2313]: I0416 00:15:43.368655 2313 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
00:15:43.368692 kubelet[2313]: I0416 00:15:43.368663 2313 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 00:15:43.368873 kubelet[2313]: I0416 00:15:43.368765 2313 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 00:15:43.371703 kubelet[2313]: I0416 00:15:43.371656 2313 state_mem.go:36] "Initialized new in-memory state store" Apr 16 00:15:43.373871 kubelet[2313]: I0416 00:15:43.373148 2313 kubelet.go:475] "Attempting to sync node with API server" Apr 16 00:15:43.373871 kubelet[2313]: I0416 00:15:43.373173 2313 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 00:15:43.373871 kubelet[2313]: E0416 00:15:43.373818 2313 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://88.198.131.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-0840528111&limit=500&resourceVersion=0\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 00:15:43.373871 kubelet[2313]: I0416 00:15:43.373865 2313 kubelet.go:387] "Adding apiserver pod source" Apr 16 00:15:43.373871 kubelet[2313]: I0416 00:15:43.373881 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 00:15:43.375158 kubelet[2313]: E0416 00:15:43.375113 2313 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://88.198.131.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 00:15:43.376621 kubelet[2313]: I0416 00:15:43.376557 2313 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 00:15:43.377268 kubelet[2313]: I0416 00:15:43.377208 2313 kubelet.go:940] 
"Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 00:15:43.377376 kubelet[2313]: I0416 00:15:43.377275 2313 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 00:15:43.377376 kubelet[2313]: W0416 00:15:43.377332 2313 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 00:15:43.381328 kubelet[2313]: I0416 00:15:43.381301 2313 server.go:1262] "Started kubelet" Apr 16 00:15:43.384362 kubelet[2313]: I0416 00:15:43.384324 2313 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 00:15:43.385087 kubelet[2313]: I0416 00:15:43.385013 2313 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 00:15:43.385087 kubelet[2313]: I0416 00:15:43.385087 2313 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 00:15:43.385468 kubelet[2313]: I0416 00:15:43.385441 2313 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 00:15:43.385883 kubelet[2313]: I0416 00:15:43.385858 2313 server.go:310] "Adding debug handlers to kubelet server" Apr 16 00:15:43.389289 kubelet[2313]: I0416 00:15:43.389265 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 00:15:43.390564 kubelet[2313]: E0416 00:15:43.388171 2313 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.198.131.37:6443/api/v1/namespaces/default/events\": dial tcp 88.198.131.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-0840528111.18a6ae1d2d48c9ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-0840528111,UID:ci-4459-2-4-n-0840528111,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-0840528111,},FirstTimestamp:2026-04-16 00:15:43.381268908 +0000 UTC m=+1.539059598,LastTimestamp:2026-04-16 00:15:43.381268908 +0000 UTC m=+1.539059598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-0840528111,}" Apr 16 00:15:43.391418 kubelet[2313]: I0416 00:15:43.391377 2313 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 00:15:43.394964 kubelet[2313]: E0416 00:15:43.394721 2313 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-0840528111\" not found" Apr 16 00:15:43.395072 kubelet[2313]: I0416 00:15:43.394974 2313 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 00:15:43.395320 kubelet[2313]: I0416 00:15:43.395170 2313 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 00:15:43.395320 kubelet[2313]: I0416 00:15:43.395279 2313 reconciler.go:29] "Reconciler: start to sync state" Apr 16 00:15:43.395813 kubelet[2313]: E0416 00:15:43.395690 2313 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://88.198.131.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 00:15:43.400030 kubelet[2313]: I0416 00:15:43.399725 2313 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or 
directory Apr 16 00:15:43.401531 kubelet[2313]: E0416 00:15:43.401489 2313 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 00:15:43.402405 kubelet[2313]: I0416 00:15:43.401771 2313 factory.go:223] Registration of the containerd container factory successfully Apr 16 00:15:43.402405 kubelet[2313]: I0416 00:15:43.401791 2313 factory.go:223] Registration of the systemd container factory successfully Apr 16 00:15:43.405385 kubelet[2313]: E0416 00:15:43.405328 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.131.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-0840528111?timeout=10s\": dial tcp 88.198.131.37:6443: connect: connection refused" interval="200ms" Apr 16 00:15:43.424493 kubelet[2313]: I0416 00:15:43.424442 2313 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 00:15:43.425842 kubelet[2313]: I0416 00:15:43.425816 2313 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 00:15:43.425973 kubelet[2313]: I0416 00:15:43.425960 2313 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 00:15:43.426078 kubelet[2313]: I0416 00:15:43.426065 2313 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 00:15:43.426250 kubelet[2313]: E0416 00:15:43.426174 2313 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 00:15:43.431920 kubelet[2313]: E0416 00:15:43.431878 2313 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://88.198.131.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 00:15:43.437044 kubelet[2313]: I0416 00:15:43.436767 2313 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 00:15:43.437044 kubelet[2313]: I0416 00:15:43.436787 2313 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 00:15:43.437044 kubelet[2313]: I0416 00:15:43.436807 2313 state_mem.go:36] "Initialized new in-memory state store" Apr 16 00:15:43.439619 kubelet[2313]: I0416 00:15:43.439585 2313 policy_none.go:49] "None policy: Start" Apr 16 00:15:43.439765 kubelet[2313]: I0416 00:15:43.439753 2313 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 00:15:43.439837 kubelet[2313]: I0416 00:15:43.439826 2313 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 00:15:43.441121 kubelet[2313]: I0416 00:15:43.441099 2313 policy_none.go:47] "Start" Apr 16 00:15:43.446675 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 00:15:43.460889 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 16 00:15:43.466349 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 00:15:43.474748 kubelet[2313]: E0416 00:15:43.474678 2313 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 00:15:43.476724 kubelet[2313]: I0416 00:15:43.475567 2313 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 00:15:43.476724 kubelet[2313]: I0416 00:15:43.475595 2313 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 00:15:43.479224 kubelet[2313]: I0416 00:15:43.478274 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 00:15:43.479224 kubelet[2313]: E0416 00:15:43.478609 2313 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 00:15:43.479224 kubelet[2313]: E0416 00:15:43.478809 2313 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-0840528111\" not found" Apr 16 00:15:43.577859 kubelet[2313]: I0416 00:15:43.577798 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:43.578579 kubelet[2313]: E0416 00:15:43.578543 2313 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.131.37:6443/api/v1/nodes\": dial tcp 88.198.131.37:6443: connect: connection refused" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599342 kubelet[2313]: I0416 00:15:43.596644 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599342 kubelet[2313]: I0416 00:15:43.596709 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599342 kubelet[2313]: I0416 00:15:43.596751 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599342 kubelet[2313]: I0416 00:15:43.596788 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d850b37f29f4e4ac64e24378c6c3a72-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-0840528111\" (UID: \"7d850b37f29f4e4ac64e24378c6c3a72\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599342 kubelet[2313]: I0416 00:15:43.596822 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fde35acccd2d66f7431a41f7e967688-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-0840528111\" (UID: \"5fde35acccd2d66f7431a41f7e967688\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.597516 systemd[1]: Created slice kubepods-burstable-pod5fde35acccd2d66f7431a41f7e967688.slice - libcontainer container kubepods-burstable-pod5fde35acccd2d66f7431a41f7e967688.slice. 
Apr 16 00:15:43.599894 kubelet[2313]: I0416 00:15:43.596864 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fde35acccd2d66f7431a41f7e967688-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-0840528111\" (UID: \"5fde35acccd2d66f7431a41f7e967688\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599894 kubelet[2313]: I0416 00:15:43.596897 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599894 kubelet[2313]: I0416 00:15:43.596931 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.599894 kubelet[2313]: I0416 00:15:43.596964 2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fde35acccd2d66f7431a41f7e967688-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-0840528111\" (UID: \"5fde35acccd2d66f7431a41f7e967688\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111" Apr 16 00:15:43.606269 kubelet[2313]: E0416 00:15:43.606145 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.131.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-0840528111?timeout=10s\": dial tcp 
88.198.131.37:6443: connect: connection refused" interval="400ms" Apr 16 00:15:43.609684 kubelet[2313]: E0416 00:15:43.609657 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:43.617207 systemd[1]: Created slice kubepods-burstable-podb69a6d1f0ae6951255b9a57956fb7a56.slice - libcontainer container kubepods-burstable-podb69a6d1f0ae6951255b9a57956fb7a56.slice. Apr 16 00:15:43.620963 kubelet[2313]: E0416 00:15:43.620869 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:43.623761 systemd[1]: Created slice kubepods-burstable-pod7d850b37f29f4e4ac64e24378c6c3a72.slice - libcontainer container kubepods-burstable-pod7d850b37f29f4e4ac64e24378c6c3a72.slice. Apr 16 00:15:43.625881 kubelet[2313]: E0416 00:15:43.625849 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:43.782207 kubelet[2313]: I0416 00:15:43.781806 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:43.782576 kubelet[2313]: E0416 00:15:43.782539 2313 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.131.37:6443/api/v1/nodes\": dial tcp 88.198.131.37:6443: connect: connection refused" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:43.914572 containerd[1514]: time="2026-04-16T00:15:43.914431817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-0840528111,Uid:5fde35acccd2d66f7431a41f7e967688,Namespace:kube-system,Attempt:0,}" Apr 16 00:15:43.924087 containerd[1514]: time="2026-04-16T00:15:43.924042159Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-0840528111,Uid:b69a6d1f0ae6951255b9a57956fb7a56,Namespace:kube-system,Attempt:0,}" Apr 16 00:15:43.929428 containerd[1514]: time="2026-04-16T00:15:43.929080301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-0840528111,Uid:7d850b37f29f4e4ac64e24378c6c3a72,Namespace:kube-system,Attempt:0,}" Apr 16 00:15:44.007430 kubelet[2313]: E0416 00:15:44.007240 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.131.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-0840528111?timeout=10s\": dial tcp 88.198.131.37:6443: connect: connection refused" interval="800ms" Apr 16 00:15:44.186596 kubelet[2313]: I0416 00:15:44.186318 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:44.187438 kubelet[2313]: E0416 00:15:44.187353 2313 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.131.37:6443/api/v1/nodes\": dial tcp 88.198.131.37:6443: connect: connection refused" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:44.319471 kubelet[2313]: E0416 00:15:44.319374 2313 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://88.198.131.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 00:15:44.357135 kubelet[2313]: E0416 00:15:44.357049 2313 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://88.198.131.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-0840528111&limit=500&resourceVersion=0\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 
00:15:44.377909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769403829.mount: Deactivated successfully. Apr 16 00:15:44.385452 containerd[1514]: time="2026-04-16T00:15:44.385386492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:15:44.386290 containerd[1514]: time="2026-04-16T00:15:44.386261632Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 16 00:15:44.389210 containerd[1514]: time="2026-04-16T00:15:44.388727177Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:15:44.390475 containerd[1514]: time="2026-04-16T00:15:44.390446050Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:15:44.391351 containerd[1514]: time="2026-04-16T00:15:44.391321869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:15:44.392146 containerd[1514]: time="2026-04-16T00:15:44.392106790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 00:15:44.394223 containerd[1514]: time="2026-04-16T00:15:44.393434223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 00:15:44.397512 containerd[1514]: time="2026-04-16T00:15:44.397459888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 00:15:44.399236 containerd[1514]: time="2026-04-16T00:15:44.399149035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 480.837769ms" Apr 16 00:15:44.405880 containerd[1514]: time="2026-04-16T00:15:44.405811881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.382885ms" Apr 16 00:15:44.406561 containerd[1514]: time="2026-04-16T00:15:44.406507424Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 480.832788ms" Apr 16 00:15:44.436578 kubelet[2313]: E0416 00:15:44.436526 2313 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://88.198.131.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.131.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 00:15:44.438747 containerd[1514]: time="2026-04-16T00:15:44.438633172Z" level=info msg="connecting to shim 1caaeb925628109b8b6fc7f7ac4e76ec20c664fa0fbf030496f2b8e5be05fee2" 
address="unix:///run/containerd/s/0e05020d38bde6932908e4499812ca30da26501f7f476e42ca749a28b5d8eeca" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:15:44.456678 containerd[1514]: time="2026-04-16T00:15:44.456625662Z" level=info msg="connecting to shim f086c69188baa2b60dd98aa1e9c84ab17728c17b81928d23703bd9c7aa3cdc97" address="unix:///run/containerd/s/5ab51530026fa9ff2b5c4bbd33ee7edde52e3ab52713b20a5c55e89c61da06a7" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:15:44.460780 containerd[1514]: time="2026-04-16T00:15:44.460730744Z" level=info msg="connecting to shim 379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7" address="unix:///run/containerd/s/788c33d8b49947a327258bf8225ab72166f266ce4847340ece134ca534d0b896" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:15:44.483347 systemd[1]: Started cri-containerd-1caaeb925628109b8b6fc7f7ac4e76ec20c664fa0fbf030496f2b8e5be05fee2.scope - libcontainer container 1caaeb925628109b8b6fc7f7ac4e76ec20c664fa0fbf030496f2b8e5be05fee2. Apr 16 00:15:44.497531 systemd[1]: Started cri-containerd-f086c69188baa2b60dd98aa1e9c84ab17728c17b81928d23703bd9c7aa3cdc97.scope - libcontainer container f086c69188baa2b60dd98aa1e9c84ab17728c17b81928d23703bd9c7aa3cdc97. Apr 16 00:15:44.505505 systemd[1]: Started cri-containerd-379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7.scope - libcontainer container 379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7. 
Apr 16 00:15:44.560170 containerd[1514]: time="2026-04-16T00:15:44.560119046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-0840528111,Uid:5fde35acccd2d66f7431a41f7e967688,Namespace:kube-system,Attempt:0,} returns sandbox id \"1caaeb925628109b8b6fc7f7ac4e76ec20c664fa0fbf030496f2b8e5be05fee2\"" Apr 16 00:15:44.572577 containerd[1514]: time="2026-04-16T00:15:44.572524830Z" level=info msg="CreateContainer within sandbox \"1caaeb925628109b8b6fc7f7ac4e76ec20c664fa0fbf030496f2b8e5be05fee2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 00:15:44.574273 containerd[1514]: time="2026-04-16T00:15:44.574230900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-0840528111,Uid:7d850b37f29f4e4ac64e24378c6c3a72,Namespace:kube-system,Attempt:0,} returns sandbox id \"f086c69188baa2b60dd98aa1e9c84ab17728c17b81928d23703bd9c7aa3cdc97\"" Apr 16 00:15:44.581264 containerd[1514]: time="2026-04-16T00:15:44.581107311Z" level=info msg="CreateContainer within sandbox \"f086c69188baa2b60dd98aa1e9c84ab17728c17b81928d23703bd9c7aa3cdc97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 00:15:44.581742 containerd[1514]: time="2026-04-16T00:15:44.581656583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-0840528111,Uid:b69a6d1f0ae6951255b9a57956fb7a56,Namespace:kube-system,Attempt:0,} returns sandbox id \"379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7\"" Apr 16 00:15:44.584569 containerd[1514]: time="2026-04-16T00:15:44.584340013Z" level=info msg="Container 291f3fed2e021437028abe771819bd0486836ad6f9be8c7f43d9d144228021d2: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:15:44.587943 containerd[1514]: time="2026-04-16T00:15:44.587907225Z" level=info msg="CreateContainer within sandbox \"379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 00:15:44.596216 containerd[1514]: time="2026-04-16T00:15:44.596142954Z" level=info msg="CreateContainer within sandbox \"1caaeb925628109b8b6fc7f7ac4e76ec20c664fa0fbf030496f2b8e5be05fee2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"291f3fed2e021437028abe771819bd0486836ad6f9be8c7f43d9d144228021d2\"" Apr 16 00:15:44.597758 containerd[1514]: time="2026-04-16T00:15:44.597516476Z" level=info msg="StartContainer for \"291f3fed2e021437028abe771819bd0486836ad6f9be8c7f43d9d144228021d2\"" Apr 16 00:15:44.598893 containerd[1514]: time="2026-04-16T00:15:44.598835906Z" level=info msg="connecting to shim 291f3fed2e021437028abe771819bd0486836ad6f9be8c7f43d9d144228021d2" address="unix:///run/containerd/s/0e05020d38bde6932908e4499812ca30da26501f7f476e42ca749a28b5d8eeca" protocol=ttrpc version=3 Apr 16 00:15:44.600028 containerd[1514]: time="2026-04-16T00:15:44.599975460Z" level=info msg="Container 442e772b84cf155ace36a3737d3f5cd05baf527dd069915bad00c84c92dc5ce9: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:15:44.606641 containerd[1514]: time="2026-04-16T00:15:44.606539206Z" level=info msg="Container f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:15:44.612790 containerd[1514]: time="2026-04-16T00:15:44.612666383Z" level=info msg="CreateContainer within sandbox \"f086c69188baa2b60dd98aa1e9c84ab17728c17b81928d23703bd9c7aa3cdc97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"442e772b84cf155ace36a3737d3f5cd05baf527dd069915bad00c84c92dc5ce9\"" Apr 16 00:15:44.613964 containerd[1514]: time="2026-04-16T00:15:44.613807217Z" level=info msg="StartContainer for \"442e772b84cf155ace36a3737d3f5cd05baf527dd069915bad00c84c92dc5ce9\"" Apr 16 00:15:44.616766 containerd[1514]: time="2026-04-16T00:15:44.616729776Z" level=info msg="connecting to shim 
442e772b84cf155ace36a3737d3f5cd05baf527dd069915bad00c84c92dc5ce9" address="unix:///run/containerd/s/5ab51530026fa9ff2b5c4bbd33ee7edde52e3ab52713b20a5c55e89c61da06a7" protocol=ttrpc version=3 Apr 16 00:15:44.617942 containerd[1514]: time="2026-04-16T00:15:44.617112014Z" level=info msg="CreateContainer within sandbox \"379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44\"" Apr 16 00:15:44.623323 containerd[1514]: time="2026-04-16T00:15:44.621921681Z" level=info msg="StartContainer for \"f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44\"" Apr 16 00:15:44.623323 containerd[1514]: time="2026-04-16T00:15:44.623078078Z" level=info msg="connecting to shim f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44" address="unix:///run/containerd/s/788c33d8b49947a327258bf8225ab72166f266ce4847340ece134ca534d0b896" protocol=ttrpc version=3 Apr 16 00:15:44.628418 systemd[1]: Started cri-containerd-291f3fed2e021437028abe771819bd0486836ad6f9be8c7f43d9d144228021d2.scope - libcontainer container 291f3fed2e021437028abe771819bd0486836ad6f9be8c7f43d9d144228021d2. Apr 16 00:15:44.661798 systemd[1]: Started cri-containerd-442e772b84cf155ace36a3737d3f5cd05baf527dd069915bad00c84c92dc5ce9.scope - libcontainer container 442e772b84cf155ace36a3737d3f5cd05baf527dd069915bad00c84c92dc5ce9. Apr 16 00:15:44.665260 systemd[1]: Started cri-containerd-f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44.scope - libcontainer container f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44. 
Apr 16 00:15:44.711737 containerd[1514]: time="2026-04-16T00:15:44.711632239Z" level=info msg="StartContainer for \"291f3fed2e021437028abe771819bd0486836ad6f9be8c7f43d9d144228021d2\" returns successfully" Apr 16 00:15:44.753977 containerd[1514]: time="2026-04-16T00:15:44.753939035Z" level=info msg="StartContainer for \"442e772b84cf155ace36a3737d3f5cd05baf527dd069915bad00c84c92dc5ce9\" returns successfully" Apr 16 00:15:44.766813 containerd[1514]: time="2026-04-16T00:15:44.766777948Z" level=info msg="StartContainer for \"f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44\" returns successfully" Apr 16 00:15:44.808750 kubelet[2313]: E0416 00:15:44.808703 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.131.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-0840528111?timeout=10s\": dial tcp 88.198.131.37:6443: connect: connection refused" interval="1.6s" Apr 16 00:15:44.992201 kubelet[2313]: I0416 00:15:44.990581 2313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:45.450703 kubelet[2313]: E0416 00:15:45.450269 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:45.457362 kubelet[2313]: E0416 00:15:45.454778 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:45.476103 kubelet[2313]: E0416 00:15:45.476069 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:46.462800 kubelet[2313]: E0416 00:15:46.462738 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:46.471343 kubelet[2313]: E0416 00:15:46.471117 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:46.471493 kubelet[2313]: E0416 00:15:46.471444 2313 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-0840528111\" not found" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:46.649989 kubelet[2313]: I0416 00:15:46.649932 2313 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-0840528111" Apr 16 00:15:46.649989 kubelet[2313]: E0416 00:15:46.649979 2313 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-0840528111\": node \"ci-4459-2-4-n-0840528111\" not found" Apr 16 00:15:46.675459 kubelet[2313]: E0416 00:15:46.675384 2313 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-0840528111\" not found" Apr 16 00:15:46.776295 kubelet[2313]: E0416 00:15:46.775804 2313 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-0840528111\" not found" Apr 16 00:15:46.876704 kubelet[2313]: E0416 00:15:46.876641 2313 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-0840528111\" not found" Apr 16 00:15:46.977051 kubelet[2313]: E0416 00:15:46.976990 2313 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-0840528111\" not found" Apr 16 00:15:47.097570 kubelet[2313]: I0416 00:15:47.097231 2313 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111" Apr 16 00:15:47.106173 kubelet[2313]: E0416 00:15:47.106117 2313 kubelet.go:3222] "Failed creating a mirror 
pod" err="pods \"kube-apiserver-ci-4459-2-4-n-0840528111\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111" Apr 16 00:15:47.106173 kubelet[2313]: I0416 00:15:47.106168 2313 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:47.109073 kubelet[2313]: E0416 00:15:47.109009 2313 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-0840528111\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:47.109073 kubelet[2313]: I0416 00:15:47.109049 2313 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-0840528111" Apr 16 00:15:47.111257 kubelet[2313]: E0416 00:15:47.111213 2313 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-0840528111\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-0840528111" Apr 16 00:15:47.381983 kubelet[2313]: I0416 00:15:47.381730 2313 apiserver.go:52] "Watching apiserver" Apr 16 00:15:47.395513 kubelet[2313]: I0416 00:15:47.395445 2313 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 00:15:48.795884 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)... Apr 16 00:15:48.796360 systemd[1]: Reloading... Apr 16 00:15:48.832103 kubelet[2313]: I0416 00:15:48.831780 2313 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" Apr 16 00:15:48.936289 zram_generator::config[2642]: No configuration found. Apr 16 00:15:49.150939 systemd[1]: Reloading finished in 354 ms. 
Apr 16 00:15:49.175568 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:15:49.189088 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 00:15:49.189748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:15:49.189924 systemd[1]: kubelet.service: Consumed 1.969s CPU time, 124M memory peak. Apr 16 00:15:49.193546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:15:49.352641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:15:49.364685 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 00:15:49.425442 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 00:15:49.425442 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 00:15:49.425442 kubelet[2684]: I0416 00:15:49.425036 2684 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 00:15:49.434679 kubelet[2684]: I0416 00:15:49.434627 2684 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 00:15:49.434679 kubelet[2684]: I0416 00:15:49.434658 2684 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 00:15:49.434679 kubelet[2684]: I0416 00:15:49.434688 2684 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 00:15:49.434679 kubelet[2684]: I0416 00:15:49.434694 2684 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 00:15:49.435513 kubelet[2684]: I0416 00:15:49.434947 2684 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 00:15:49.437289 kubelet[2684]: I0416 00:15:49.437241 2684 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 00:15:49.443211 kubelet[2684]: I0416 00:15:49.442742 2684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 00:15:49.450345 kubelet[2684]: I0416 00:15:49.450304 2684 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 00:15:49.453520 kubelet[2684]: I0416 00:15:49.453488 2684 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 00:15:49.453726 kubelet[2684]: I0416 00:15:49.453694 2684 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 00:15:49.453884 kubelet[2684]: I0416 00:15:49.453725 2684 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-0840528111","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 00:15:49.453884 kubelet[2684]: I0416 00:15:49.453883 2684 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 00:15:49.454005 kubelet[2684]: I0416 00:15:49.453892 2684 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 00:15:49.454005 kubelet[2684]: I0416 00:15:49.453914 2684 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 00:15:49.454111 kubelet[2684]: I0416 00:15:49.454099 2684 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 00:15:49.454346 kubelet[2684]: I0416 00:15:49.454322 2684 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 00:15:49.456225 kubelet[2684]: I0416 00:15:49.454354 2684 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 00:15:49.456225 kubelet[2684]: I0416 00:15:49.454386 2684 kubelet.go:387] "Adding apiserver pod source"
Apr 16 00:15:49.456225 kubelet[2684]: I0416 00:15:49.454398 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 00:15:49.457982 kubelet[2684]: I0416 00:15:49.457961 2684 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 00:15:49.459426 kubelet[2684]: I0416 00:15:49.459397 2684 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 00:15:49.459709 kubelet[2684]: I0416 00:15:49.459691 2684 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 00:15:49.464570 kubelet[2684]: I0416 00:15:49.464546 2684 server.go:1262] "Started kubelet"
Apr 16 00:15:49.469846 kubelet[2684]: I0416 00:15:49.469814 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 00:15:49.483371 kubelet[2684]: I0416 00:15:49.483324 2684 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 00:15:49.484875 kubelet[2684]: I0416 00:15:49.484853 2684 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 00:15:49.490496 kubelet[2684]: I0416 00:15:49.490415 2684 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 00:15:49.490683 kubelet[2684]: I0416 00:15:49.490668 2684 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 00:15:49.490946 kubelet[2684]: I0416 00:15:49.490929 2684 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 00:15:49.491396 kubelet[2684]: I0416 00:15:49.491373 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 00:15:49.494767 kubelet[2684]: I0416 00:15:49.494734 2684 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 16 00:15:49.494890 kubelet[2684]: I0416 00:15:49.494875 2684 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 00:15:49.495023 kubelet[2684]: I0416 00:15:49.495005 2684 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 00:15:49.498510 kubelet[2684]: I0416 00:15:49.498467 2684 factory.go:223] Registration of the systemd container factory successfully
Apr 16 00:15:49.499980 kubelet[2684]: I0416 00:15:49.499946 2684 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 00:15:49.500434 kubelet[2684]: I0416 00:15:49.500377 2684 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 00:15:49.506293 kubelet[2684]: I0416 00:15:49.506255 2684 factory.go:223] Registration of the containerd container factory successfully
Apr 16 00:15:49.510333 kubelet[2684]: I0416 00:15:49.510298 2684 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 00:15:49.510484 kubelet[2684]: I0416 00:15:49.510473 2684 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 16 00:15:49.510552 kubelet[2684]: I0416 00:15:49.510543 2684 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 16 00:15:49.510669 kubelet[2684]: E0416 00:15:49.510650 2684 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 00:15:49.573507 kubelet[2684]: I0416 00:15:49.573473 2684 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 00:15:49.574033 kubelet[2684]: I0416 00:15:49.574012 2684 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 00:15:49.574197 kubelet[2684]: I0416 00:15:49.574157 2684 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 00:15:49.574494 kubelet[2684]: I0416 00:15:49.574473 2684 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 16 00:15:49.574584 kubelet[2684]: I0416 00:15:49.574561 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 16 00:15:49.574642 kubelet[2684]: I0416 00:15:49.574634 2684 policy_none.go:49] "None policy: Start"
Apr 16 00:15:49.574698 kubelet[2684]: I0416 00:15:49.574689 2684 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 00:15:49.574760 kubelet[2684]: I0416 00:15:49.574750 2684 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 00:15:49.575045 kubelet[2684]: I0416 00:15:49.575017 2684 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 16 00:15:49.575116 kubelet[2684]: I0416 00:15:49.575107 2684 policy_none.go:47] "Start"
Apr 16 00:15:49.581535 kubelet[2684]: E0416 00:15:49.581498 2684 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 00:15:49.583110 kubelet[2684]: I0416 00:15:49.582635 2684 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 00:15:49.583110 kubelet[2684]: I0416 00:15:49.582662 2684 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 00:15:49.585232 kubelet[2684]: I0416 00:15:49.584557 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 00:15:49.588724 kubelet[2684]: E0416 00:15:49.588510 2684 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 00:15:49.611759 kubelet[2684]: I0416 00:15:49.611722 2684 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.611980 kubelet[2684]: I0416 00:15:49.611771 2684 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.612130 kubelet[2684]: I0416 00:15:49.611885 2684 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.625103 kubelet[2684]: E0416 00:15:49.625000 2684 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-0840528111\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.694796 kubelet[2684]: I0416 00:15:49.694214 2684 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.697850 kubelet[2684]: I0416 00:15:49.697453 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.697850 kubelet[2684]: I0416 00:15:49.697564 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.697850 kubelet[2684]: I0416 00:15:49.697584 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fde35acccd2d66f7431a41f7e967688-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-0840528111\" (UID: \"5fde35acccd2d66f7431a41f7e967688\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.697850 kubelet[2684]: I0416 00:15:49.697604 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fde35acccd2d66f7431a41f7e967688-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-0840528111\" (UID: \"5fde35acccd2d66f7431a41f7e967688\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.697850 kubelet[2684]: I0416 00:15:49.697626 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.698060 kubelet[2684]: I0416 00:15:49.697643 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.698060 kubelet[2684]: I0416 00:15:49.697668 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b69a6d1f0ae6951255b9a57956fb7a56-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-0840528111\" (UID: \"b69a6d1f0ae6951255b9a57956fb7a56\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.698060 kubelet[2684]: I0416 00:15:49.697684 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d850b37f29f4e4ac64e24378c6c3a72-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-0840528111\" (UID: \"7d850b37f29f4e4ac64e24378c6c3a72\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.698060 kubelet[2684]: I0416 00:15:49.697713 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fde35acccd2d66f7431a41f7e967688-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-0840528111\" (UID: \"5fde35acccd2d66f7431a41f7e967688\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.707804 kubelet[2684]: I0416 00:15:49.707678 2684 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-0840528111"
Apr 16 00:15:49.708046 kubelet[2684]: I0416 00:15:49.708022 2684 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-0840528111"
Apr 16 00:15:50.455214 kubelet[2684]: I0416 00:15:50.455145 2684 apiserver.go:52] "Watching apiserver"
Apr 16 00:15:50.495296 kubelet[2684]: I0416 00:15:50.495241 2684 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 16 00:15:50.557412 kubelet[2684]: I0416 00:15:50.557201 2684 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111"
Apr 16 00:15:50.571212 kubelet[2684]: E0416 00:15:50.569955 2684 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-0840528111\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111"
Apr 16 00:15:50.619160 kubelet[2684]: I0416 00:15:50.619057 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-0840528111" podStartSLOduration=1.619030376 podStartE2EDuration="1.619030376s" podCreationTimestamp="2026-04-16 00:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:15:50.595806182 +0000 UTC m=+1.226781652" watchObservedRunningTime="2026-04-16 00:15:50.619030376 +0000 UTC m=+1.250005806"
Apr 16 00:15:50.619579 kubelet[2684]: I0416 00:15:50.619465 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-0840528111" podStartSLOduration=1.619452554 podStartE2EDuration="1.619452554s" podCreationTimestamp="2026-04-16 00:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:15:50.618050399 +0000 UTC m=+1.249025789" watchObservedRunningTime="2026-04-16 00:15:50.619452554 +0000 UTC m=+1.250428024"
Apr 16 00:15:50.661605 kubelet[2684]: I0416 00:15:50.661225 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-0840528111" podStartSLOduration=2.661207209 podStartE2EDuration="2.661207209s" podCreationTimestamp="2026-04-16 00:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:15:50.637843795 +0000 UTC m=+1.268819225" watchObservedRunningTime="2026-04-16 00:15:50.661207209 +0000 UTC m=+1.292182599"
Apr 16 00:15:55.684571 kubelet[2684]: I0416 00:15:55.684479 2684 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 00:15:55.685774 containerd[1514]: time="2026-04-16T00:15:55.685743615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 00:15:55.686888 kubelet[2684]: I0416 00:15:55.686309 2684 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 00:15:56.407278 systemd[1]: Created slice kubepods-besteffort-podb71c4a7f_e59c_451e_934b_1932fc43a7db.slice - libcontainer container kubepods-besteffort-podb71c4a7f_e59c_451e_934b_1932fc43a7db.slice.
Apr 16 00:15:56.443363 kubelet[2684]: I0416 00:15:56.443301 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b71c4a7f-e59c-451e-934b-1932fc43a7db-xtables-lock\") pod \"kube-proxy-pwdwb\" (UID: \"b71c4a7f-e59c-451e-934b-1932fc43a7db\") " pod="kube-system/kube-proxy-pwdwb"
Apr 16 00:15:56.443363 kubelet[2684]: I0416 00:15:56.443348 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2645\" (UniqueName: \"kubernetes.io/projected/b71c4a7f-e59c-451e-934b-1932fc43a7db-kube-api-access-n2645\") pod \"kube-proxy-pwdwb\" (UID: \"b71c4a7f-e59c-451e-934b-1932fc43a7db\") " pod="kube-system/kube-proxy-pwdwb"
Apr 16 00:15:56.443363 kubelet[2684]: I0416 00:15:56.443375 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b71c4a7f-e59c-451e-934b-1932fc43a7db-kube-proxy\") pod \"kube-proxy-pwdwb\" (UID: \"b71c4a7f-e59c-451e-934b-1932fc43a7db\") " pod="kube-system/kube-proxy-pwdwb"
Apr 16 00:15:56.443680 kubelet[2684]: I0416 00:15:56.443395 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b71c4a7f-e59c-451e-934b-1932fc43a7db-lib-modules\") pod \"kube-proxy-pwdwb\" (UID: \"b71c4a7f-e59c-451e-934b-1932fc43a7db\") " pod="kube-system/kube-proxy-pwdwb"
Apr 16 00:15:56.722683 containerd[1514]: time="2026-04-16T00:15:56.722230555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwdwb,Uid:b71c4a7f-e59c-451e-934b-1932fc43a7db,Namespace:kube-system,Attempt:0,}"
Apr 16 00:15:56.748643 containerd[1514]: time="2026-04-16T00:15:56.748592327Z" level=info msg="connecting to shim 0a2761de971c1e4c0e5f6f350c3802f276f1cb5998f001063553a35c2a634283" address="unix:///run/containerd/s/e341487bda4a0bf570c18ffdba12bb4a81c218ce20b95104207665925907351a" namespace=k8s.io protocol=ttrpc version=3
Apr 16 00:15:56.784542 systemd[1]: Started cri-containerd-0a2761de971c1e4c0e5f6f350c3802f276f1cb5998f001063553a35c2a634283.scope - libcontainer container 0a2761de971c1e4c0e5f6f350c3802f276f1cb5998f001063553a35c2a634283.
Apr 16 00:15:56.826963 containerd[1514]: time="2026-04-16T00:15:56.826923533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwdwb,Uid:b71c4a7f-e59c-451e-934b-1932fc43a7db,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a2761de971c1e4c0e5f6f350c3802f276f1cb5998f001063553a35c2a634283\""
Apr 16 00:15:56.835907 containerd[1514]: time="2026-04-16T00:15:56.835834856Z" level=info msg="CreateContainer within sandbox \"0a2761de971c1e4c0e5f6f350c3802f276f1cb5998f001063553a35c2a634283\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 00:15:56.856527 containerd[1514]: time="2026-04-16T00:15:56.855523677Z" level=info msg="Container 4461c0d0cc2e8a32607eb0f7224795a98fba21c62ccfa16c6df614e69602baea: CDI devices from CRI Config.CDIDevices: []"
Apr 16 00:15:56.856498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583844751.mount: Deactivated successfully.
Apr 16 00:15:56.870699 containerd[1514]: time="2026-04-16T00:15:56.870631385Z" level=info msg="CreateContainer within sandbox \"0a2761de971c1e4c0e5f6f350c3802f276f1cb5998f001063553a35c2a634283\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4461c0d0cc2e8a32607eb0f7224795a98fba21c62ccfa16c6df614e69602baea\""
Apr 16 00:15:56.871844 containerd[1514]: time="2026-04-16T00:15:56.871821138Z" level=info msg="StartContainer for \"4461c0d0cc2e8a32607eb0f7224795a98fba21c62ccfa16c6df614e69602baea\""
Apr 16 00:15:56.874866 containerd[1514]: time="2026-04-16T00:15:56.874782938Z" level=info msg="connecting to shim 4461c0d0cc2e8a32607eb0f7224795a98fba21c62ccfa16c6df614e69602baea" address="unix:///run/containerd/s/e341487bda4a0bf570c18ffdba12bb4a81c218ce20b95104207665925907351a" protocol=ttrpc version=3
Apr 16 00:15:56.903764 systemd[1]: Started cri-containerd-4461c0d0cc2e8a32607eb0f7224795a98fba21c62ccfa16c6df614e69602baea.scope - libcontainer container 4461c0d0cc2e8a32607eb0f7224795a98fba21c62ccfa16c6df614e69602baea.
Apr 16 00:15:56.939199 systemd[1]: Created slice kubepods-besteffort-pod1b69c9e5_1ab7_401a_ae46_de1d53c3f81d.slice - libcontainer container kubepods-besteffort-pod1b69c9e5_1ab7_401a_ae46_de1d53c3f81d.slice.
Apr 16 00:15:56.947882 kubelet[2684]: I0416 00:15:56.947806 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1b69c9e5-1ab7-401a-ae46-de1d53c3f81d-var-lib-calico\") pod \"tigera-operator-5588576f44-jr8md\" (UID: \"1b69c9e5-1ab7-401a-ae46-de1d53c3f81d\") " pod="tigera-operator/tigera-operator-5588576f44-jr8md"
Apr 16 00:15:56.947882 kubelet[2684]: I0416 00:15:56.947843 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zffxk\" (UniqueName: \"kubernetes.io/projected/1b69c9e5-1ab7-401a-ae46-de1d53c3f81d-kube-api-access-zffxk\") pod \"tigera-operator-5588576f44-jr8md\" (UID: \"1b69c9e5-1ab7-401a-ae46-de1d53c3f81d\") " pod="tigera-operator/tigera-operator-5588576f44-jr8md"
Apr 16 00:15:57.011485 containerd[1514]: time="2026-04-16T00:15:57.010904064Z" level=info msg="StartContainer for \"4461c0d0cc2e8a32607eb0f7224795a98fba21c62ccfa16c6df614e69602baea\" returns successfully"
Apr 16 00:15:57.247796 containerd[1514]: time="2026-04-16T00:15:57.247462791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jr8md,Uid:1b69c9e5-1ab7-401a-ae46-de1d53c3f81d,Namespace:tigera-operator,Attempt:0,}"
Apr 16 00:15:57.274199 containerd[1514]: time="2026-04-16T00:15:57.273881212Z" level=info msg="connecting to shim c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217" address="unix:///run/containerd/s/980f6a5689d5184356e0c77f074e3f58a54297ecb6b37e15375a22a8473b2df4" namespace=k8s.io protocol=ttrpc version=3
Apr 16 00:15:57.298569 systemd[1]: Started cri-containerd-c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217.scope - libcontainer container c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217.
Apr 16 00:15:57.342682 containerd[1514]: time="2026-04-16T00:15:57.342630586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-jr8md,Uid:1b69c9e5-1ab7-401a-ae46-de1d53c3f81d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217\""
Apr 16 00:15:57.345832 containerd[1514]: time="2026-04-16T00:15:57.345768984Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 00:15:58.947791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958010258.mount: Deactivated successfully.
Apr 16 00:15:59.389999 containerd[1514]: time="2026-04-16T00:15:59.389815440Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:15:59.391648 containerd[1514]: time="2026-04-16T00:15:59.391607980Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 16 00:15:59.392557 containerd[1514]: time="2026-04-16T00:15:59.392524811Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:15:59.395560 containerd[1514]: time="2026-04-16T00:15:59.395518125Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:15:59.396764 containerd[1514]: time="2026-04-16T00:15:59.396732619Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.05090299s"
Apr 16 00:15:59.396822 containerd[1514]: time="2026-04-16T00:15:59.396772902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 16 00:15:59.402930 containerd[1514]: time="2026-04-16T00:15:59.402886859Z" level=info msg="CreateContainer within sandbox \"c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 00:15:59.414647 containerd[1514]: time="2026-04-16T00:15:59.413815510Z" level=info msg="Container 6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5: CDI devices from CRI Config.CDIDevices: []"
Apr 16 00:15:59.418925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665863056.mount: Deactivated successfully.
Apr 16 00:15:59.423655 containerd[1514]: time="2026-04-16T00:15:59.423586791Z" level=info msg="CreateContainer within sandbox \"c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5\""
Apr 16 00:15:59.425512 containerd[1514]: time="2026-04-16T00:15:59.425474458Z" level=info msg="StartContainer for \"6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5\""
Apr 16 00:15:59.427009 containerd[1514]: time="2026-04-16T00:15:59.426979576Z" level=info msg="connecting to shim 6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5" address="unix:///run/containerd/s/980f6a5689d5184356e0c77f074e3f58a54297ecb6b37e15375a22a8473b2df4" protocol=ttrpc version=3
Apr 16 00:15:59.453414 systemd[1]: Started cri-containerd-6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5.scope - libcontainer container 6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5.
Apr 16 00:15:59.489986 containerd[1514]: time="2026-04-16T00:15:59.489942200Z" level=info msg="StartContainer for \"6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5\" returns successfully"
Apr 16 00:15:59.526126 kubelet[2684]: I0416 00:15:59.526055 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pwdwb" podStartSLOduration=3.526034292 podStartE2EDuration="3.526034292s" podCreationTimestamp="2026-04-16 00:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:15:57.59540619 +0000 UTC m=+8.226381580" watchObservedRunningTime="2026-04-16 00:15:59.526034292 +0000 UTC m=+10.157009722"
Apr 16 00:15:59.607506 kubelet[2684]: I0416 00:15:59.607425 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-jr8md" podStartSLOduration=1.55401647 podStartE2EDuration="3.60739503s" podCreationTimestamp="2026-04-16 00:15:56 +0000 UTC" firstStartedPulling="2026-04-16 00:15:57.34413916 +0000 UTC m=+7.975114510" lastFinishedPulling="2026-04-16 00:15:59.39751768 +0000 UTC m=+10.028493070" observedRunningTime="2026-04-16 00:15:59.606505521 +0000 UTC m=+10.237480871" watchObservedRunningTime="2026-04-16 00:15:59.60739503 +0000 UTC m=+10.238370420"
Apr 16 00:16:05.190260 update_engine[1487]: I20260416 00:16:05.188766 1487 update_attempter.cc:509] Updating boot flags...
Apr 16 00:16:05.786423 sudo[1760]: pam_unix(sudo:session): session closed for user root
Apr 16 00:16:05.802796 sshd[1759]: Connection closed by 4.175.71.9 port 52606
Apr 16 00:16:05.803355 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Apr 16 00:16:05.810264 systemd[1]: sshd@6-88.198.131.37:22-4.175.71.9:52606.service: Deactivated successfully.
Apr 16 00:16:05.815266 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 00:16:05.816316 systemd[1]: session-7.scope: Consumed 7.665s CPU time, 221.9M memory peak.
Apr 16 00:16:05.822012 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit.
Apr 16 00:16:05.824511 systemd-logind[1486]: Removed session 7.
Apr 16 00:16:12.379449 systemd[1]: Created slice kubepods-besteffort-pod71cc08c8_0053_4615_abdb_475ac0ed0934.slice - libcontainer container kubepods-besteffort-pod71cc08c8_0053_4615_abdb_475ac0ed0934.slice.
Apr 16 00:16:12.455462 kubelet[2684]: I0416 00:16:12.455388 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5k58\" (UniqueName: \"kubernetes.io/projected/71cc08c8-0053-4615-abdb-475ac0ed0934-kube-api-access-b5k58\") pod \"calico-typha-58d8b999b5-mrcbh\" (UID: \"71cc08c8-0053-4615-abdb-475ac0ed0934\") " pod="calico-system/calico-typha-58d8b999b5-mrcbh"
Apr 16 00:16:12.455462 kubelet[2684]: I0416 00:16:12.455450 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/71cc08c8-0053-4615-abdb-475ac0ed0934-typha-certs\") pod \"calico-typha-58d8b999b5-mrcbh\" (UID: \"71cc08c8-0053-4615-abdb-475ac0ed0934\") " pod="calico-system/calico-typha-58d8b999b5-mrcbh"
Apr 16 00:16:12.455462 kubelet[2684]: I0416 00:16:12.455470 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cc08c8-0053-4615-abdb-475ac0ed0934-tigera-ca-bundle\") pod \"calico-typha-58d8b999b5-mrcbh\" (UID: \"71cc08c8-0053-4615-abdb-475ac0ed0934\") " pod="calico-system/calico-typha-58d8b999b5-mrcbh"
Apr 16 00:16:12.475671 systemd[1]: Created slice kubepods-besteffort-pod78fb3dfb_15ad_4f7a_9a39_d661eb90a287.slice - libcontainer container kubepods-besteffort-pod78fb3dfb_15ad_4f7a_9a39_d661eb90a287.slice.
Apr 16 00:16:12.556500 kubelet[2684]: I0416 00:16:12.556442 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-node-certs\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.556904 kubelet[2684]: I0416 00:16:12.556877 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-cni-log-dir\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.556945 kubelet[2684]: I0416 00:16:12.556909 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-lib-modules\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.556945 kubelet[2684]: I0416 00:16:12.556934 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-bpffs\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557001 kubelet[2684]: I0416 00:16:12.556950 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-nodeproc\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557001 kubelet[2684]: I0416 00:16:12.556982 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-cni-bin-dir\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557057 kubelet[2684]: I0416 00:16:12.557010 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-var-lib-calico\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557057 kubelet[2684]: I0416 00:16:12.557036 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-tigera-ca-bundle\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557057 kubelet[2684]: I0416 00:16:12.557054 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-var-run-calico\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557122 kubelet[2684]: I0416 00:16:12.557070 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gphn4\" (UniqueName: \"kubernetes.io/projected/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-kube-api-access-gphn4\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557122 kubelet[2684]: I0416 00:16:12.557087 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-cni-net-dir\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557122 kubelet[2684]: I0416 00:16:12.557101 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-policysync\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557197 kubelet[2684]: I0416 00:16:12.557134 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-sys-fs\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557197 kubelet[2684]: I0416 00:16:12.557150 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-xtables-lock\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.557308 kubelet[2684]: I0416 00:16:12.557219 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78fb3dfb-15ad-4f7a-9a39-d661eb90a287-flexvol-driver-host\") pod \"calico-node-rbf4f\" (UID: \"78fb3dfb-15ad-4f7a-9a39-d661eb90a287\") " pod="calico-system/calico-node-rbf4f"
Apr 16 00:16:12.595803 kubelet[2684]: E0416 00:16:12.595332 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0"
Apr 16 00:16:12.659853 kubelet[2684]: I0416 00:16:12.658558 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0-varrun\") pod \"csi-node-driver-q9xkv\" (UID: \"8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0\") " pod="calico-system/csi-node-driver-q9xkv"
Apr 16 00:16:12.659853 kubelet[2684]: I0416 00:16:12.658670 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwf6\" (UniqueName: \"kubernetes.io/projected/8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0-kube-api-access-czwf6\") pod \"csi-node-driver-q9xkv\" (UID: \"8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0\") " pod="calico-system/csi-node-driver-q9xkv"
Apr 16 00:16:12.659853 kubelet[2684]: I0416 00:16:12.658794 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0-kubelet-dir\") pod \"csi-node-driver-q9xkv\" (UID: \"8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0\") " pod="calico-system/csi-node-driver-q9xkv"
Apr 16 00:16:12.659853 kubelet[2684]: I0416 00:16:12.658973 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0-registration-dir\") pod \"csi-node-driver-q9xkv\" (UID: \"8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0\") " pod="calico-system/csi-node-driver-q9xkv"
Apr 16 00:16:12.659853 kubelet[2684]: I0416 00:16:12.659061 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0-socket-dir\") pod
\"csi-node-driver-q9xkv\" (UID: \"8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0\") " pod="calico-system/csi-node-driver-q9xkv" Apr 16 00:16:12.666326 kubelet[2684]: E0416 00:16:12.666286 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.666326 kubelet[2684]: W0416 00:16:12.666315 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.666485 kubelet[2684]: E0416 00:16:12.666347 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.666904 kubelet[2684]: E0416 00:16:12.666882 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.666904 kubelet[2684]: W0416 00:16:12.666901 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.667136 kubelet[2684]: E0416 00:16:12.666914 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.670863 kubelet[2684]: E0416 00:16:12.670794 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.670863 kubelet[2684]: W0416 00:16:12.670821 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.670863 kubelet[2684]: E0416 00:16:12.670840 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.676496 kubelet[2684]: E0416 00:16:12.676415 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.676496 kubelet[2684]: W0416 00:16:12.676445 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.676496 kubelet[2684]: E0416 00:16:12.676476 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.678326 kubelet[2684]: E0416 00:16:12.678285 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.678326 kubelet[2684]: W0416 00:16:12.678311 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.678451 kubelet[2684]: E0416 00:16:12.678339 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.679643 kubelet[2684]: E0416 00:16:12.679574 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.679643 kubelet[2684]: W0416 00:16:12.679595 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.679643 kubelet[2684]: E0416 00:16:12.679610 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.680000 kubelet[2684]: E0416 00:16:12.679982 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.680120 kubelet[2684]: W0416 00:16:12.680060 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.680120 kubelet[2684]: E0416 00:16:12.680076 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.680376 kubelet[2684]: E0416 00:16:12.680357 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.680515 kubelet[2684]: W0416 00:16:12.680439 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.680515 kubelet[2684]: E0416 00:16:12.680456 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.680855 kubelet[2684]: E0416 00:16:12.680837 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.680939 kubelet[2684]: W0416 00:16:12.680928 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.681078 kubelet[2684]: E0416 00:16:12.680982 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.681250 kubelet[2684]: E0416 00:16:12.681239 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.681332 kubelet[2684]: W0416 00:16:12.681319 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.681524 kubelet[2684]: E0416 00:16:12.681401 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.681998 kubelet[2684]: E0416 00:16:12.681933 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.681998 kubelet[2684]: W0416 00:16:12.681951 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.681998 kubelet[2684]: E0416 00:16:12.681964 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.688677 containerd[1514]: time="2026-04-16T00:16:12.688059538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58d8b999b5-mrcbh,Uid:71cc08c8-0053-4615-abdb-475ac0ed0934,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:12.711606 kubelet[2684]: E0416 00:16:12.709868 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.711606 kubelet[2684]: W0416 00:16:12.709893 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.711606 kubelet[2684]: E0416 00:16:12.709916 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.718353 containerd[1514]: time="2026-04-16T00:16:12.718246954Z" level=info msg="connecting to shim f78f67cd8bd81d114e5cdc762ec6a8db32d67d9f0bbb4012c0d207b6644bad4f" address="unix:///run/containerd/s/4ef46c000629480c9ab9e509dc5b23b3893a06fd0e9ab0413df03173c959c4b0" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:12.747414 systemd[1]: Started cri-containerd-f78f67cd8bd81d114e5cdc762ec6a8db32d67d9f0bbb4012c0d207b6644bad4f.scope - libcontainer container f78f67cd8bd81d114e5cdc762ec6a8db32d67d9f0bbb4012c0d207b6644bad4f. Apr 16 00:16:12.759638 kubelet[2684]: E0416 00:16:12.759614 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.760027 kubelet[2684]: W0416 00:16:12.759840 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.760027 kubelet[2684]: E0416 00:16:12.759868 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.760264 kubelet[2684]: E0416 00:16:12.760222 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.760264 kubelet[2684]: W0416 00:16:12.760240 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.760264 kubelet[2684]: E0416 00:16:12.760252 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.762137 kubelet[2684]: E0416 00:16:12.762067 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.762408 kubelet[2684]: W0416 00:16:12.762336 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.762408 kubelet[2684]: E0416 00:16:12.762365 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.762626 kubelet[2684]: E0416 00:16:12.762602 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.762626 kubelet[2684]: W0416 00:16:12.762620 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.762687 kubelet[2684]: E0416 00:16:12.762633 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.763018 kubelet[2684]: E0416 00:16:12.762978 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.763018 kubelet[2684]: W0416 00:16:12.763001 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.763018 kubelet[2684]: E0416 00:16:12.763014 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.763394 kubelet[2684]: E0416 00:16:12.763264 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.763394 kubelet[2684]: W0416 00:16:12.763280 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.763394 kubelet[2684]: E0416 00:16:12.763295 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.764193 kubelet[2684]: E0416 00:16:12.764040 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.764193 kubelet[2684]: W0416 00:16:12.764057 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.764193 kubelet[2684]: E0416 00:16:12.764086 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.764644 kubelet[2684]: E0416 00:16:12.764616 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.764772 kubelet[2684]: W0416 00:16:12.764709 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.764772 kubelet[2684]: E0416 00:16:12.764744 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.765129 kubelet[2684]: E0416 00:16:12.765117 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.765849 kubelet[2684]: W0416 00:16:12.765661 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.765849 kubelet[2684]: E0416 00:16:12.765687 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.765998 kubelet[2684]: E0416 00:16:12.765983 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.766067 kubelet[2684]: W0416 00:16:12.766054 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.766174 kubelet[2684]: E0416 00:16:12.766111 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.766612 kubelet[2684]: E0416 00:16:12.766598 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.766703 kubelet[2684]: W0416 00:16:12.766691 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.766844 kubelet[2684]: E0416 00:16:12.766772 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.767275 kubelet[2684]: E0416 00:16:12.767171 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.767275 kubelet[2684]: W0416 00:16:12.767211 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.767275 kubelet[2684]: E0416 00:16:12.767223 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.767707 kubelet[2684]: E0416 00:16:12.767678 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.767707 kubelet[2684]: W0416 00:16:12.767691 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.767973 kubelet[2684]: E0416 00:16:12.767836 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.768127 kubelet[2684]: E0416 00:16:12.768095 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.768127 kubelet[2684]: W0416 00:16:12.768107 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.768353 kubelet[2684]: E0416 00:16:12.768269 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.768639 kubelet[2684]: E0416 00:16:12.768619 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.768753 kubelet[2684]: W0416 00:16:12.768710 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.768753 kubelet[2684]: E0416 00:16:12.768727 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.769133 kubelet[2684]: E0416 00:16:12.769116 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.769255 kubelet[2684]: W0416 00:16:12.769228 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.769255 kubelet[2684]: E0416 00:16:12.769244 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.769602 kubelet[2684]: E0416 00:16:12.769539 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.769602 kubelet[2684]: W0416 00:16:12.769554 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.769602 kubelet[2684]: E0416 00:16:12.769574 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.769926 kubelet[2684]: E0416 00:16:12.769903 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.769998 kubelet[2684]: W0416 00:16:12.769970 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.770103 kubelet[2684]: E0416 00:16:12.770043 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.770388 kubelet[2684]: E0416 00:16:12.770376 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.770520 kubelet[2684]: W0416 00:16:12.770450 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.770520 kubelet[2684]: E0416 00:16:12.770466 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.771360 kubelet[2684]: E0416 00:16:12.771339 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.771426 kubelet[2684]: W0416 00:16:12.771415 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.771485 kubelet[2684]: E0416 00:16:12.771475 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.771689 kubelet[2684]: E0416 00:16:12.771679 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.771771 kubelet[2684]: W0416 00:16:12.771759 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.771837 kubelet[2684]: E0416 00:16:12.771819 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.772214 kubelet[2684]: E0416 00:16:12.772063 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.772214 kubelet[2684]: W0416 00:16:12.772074 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.772214 kubelet[2684]: E0416 00:16:12.772083 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.772440 kubelet[2684]: E0416 00:16:12.772429 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.772566 kubelet[2684]: W0416 00:16:12.772488 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.772566 kubelet[2684]: E0416 00:16:12.772502 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.772817 kubelet[2684]: E0416 00:16:12.772804 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.772904 kubelet[2684]: W0416 00:16:12.772887 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.772964 kubelet[2684]: E0416 00:16:12.772953 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.773403 kubelet[2684]: E0416 00:16:12.773233 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.773403 kubelet[2684]: W0416 00:16:12.773246 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.773403 kubelet[2684]: E0416 00:16:12.773257 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:12.783389 containerd[1514]: time="2026-04-16T00:16:12.783200501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rbf4f,Uid:78fb3dfb-15ad-4f7a-9a39-d661eb90a287,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:12.790964 kubelet[2684]: E0416 00:16:12.790934 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:12.791262 kubelet[2684]: W0416 00:16:12.791096 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:12.791262 kubelet[2684]: E0416 00:16:12.791123 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:12.799580 containerd[1514]: time="2026-04-16T00:16:12.799493370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58d8b999b5-mrcbh,Uid:71cc08c8-0053-4615-abdb-475ac0ed0934,Namespace:calico-system,Attempt:0,} returns sandbox id \"f78f67cd8bd81d114e5cdc762ec6a8db32d67d9f0bbb4012c0d207b6644bad4f\"" Apr 16 00:16:12.802164 containerd[1514]: time="2026-04-16T00:16:12.801605601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 00:16:12.806869 containerd[1514]: time="2026-04-16T00:16:12.806698052Z" level=info msg="connecting to shim d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9" address="unix:///run/containerd/s/a641bf6272888a294e173fcd7304a975da94bbfa8261e1854073139969083736" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:12.833501 systemd[1]: Started cri-containerd-d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9.scope - libcontainer container d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9. Apr 16 00:16:12.868004 containerd[1514]: time="2026-04-16T00:16:12.867932394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rbf4f,Uid:78fb3dfb-15ad-4f7a-9a39-d661eb90a287,Namespace:calico-system,Attempt:0,} returns sandbox id \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\"" Apr 16 00:16:14.415370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467570425.mount: Deactivated successfully. 
Apr 16 00:16:14.512017 kubelet[2684]: E0416 00:16:14.511923 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0" Apr 16 00:16:15.124476 containerd[1514]: time="2026-04-16T00:16:15.124420290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:15.125885 containerd[1514]: time="2026-04-16T00:16:15.125607243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 16 00:16:15.127231 containerd[1514]: time="2026-04-16T00:16:15.127159566Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:15.134655 containerd[1514]: time="2026-04-16T00:16:15.134604012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.33295609s" Apr 16 00:16:15.134655 containerd[1514]: time="2026-04-16T00:16:15.134650014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 16 00:16:15.135949 containerd[1514]: time="2026-04-16T00:16:15.135315832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 16 00:16:15.136908 containerd[1514]: time="2026-04-16T00:16:15.136884076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 00:16:15.156373 containerd[1514]: time="2026-04-16T00:16:15.155961765Z" level=info msg="CreateContainer within sandbox \"f78f67cd8bd81d114e5cdc762ec6a8db32d67d9f0bbb4012c0d207b6644bad4f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 00:16:15.171566 containerd[1514]: time="2026-04-16T00:16:15.170408886Z" level=info msg="Container 62e73a0df269f7523bf79773fbffcafa0278ed4e57287c80a4439b2ae3c3737e: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:15.175046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602681411.mount: Deactivated successfully. Apr 16 00:16:15.181021 containerd[1514]: time="2026-04-16T00:16:15.180976979Z" level=info msg="CreateContainer within sandbox \"f78f67cd8bd81d114e5cdc762ec6a8db32d67d9f0bbb4012c0d207b6644bad4f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62e73a0df269f7523bf79773fbffcafa0278ed4e57287c80a4439b2ae3c3737e\"" Apr 16 00:16:15.182235 containerd[1514]: time="2026-04-16T00:16:15.182171812Z" level=info msg="StartContainer for \"62e73a0df269f7523bf79773fbffcafa0278ed4e57287c80a4439b2ae3c3737e\"" Apr 16 00:16:15.184912 containerd[1514]: time="2026-04-16T00:16:15.184851566Z" level=info msg="connecting to shim 62e73a0df269f7523bf79773fbffcafa0278ed4e57287c80a4439b2ae3c3737e" address="unix:///run/containerd/s/4ef46c000629480c9ab9e509dc5b23b3893a06fd0e9ab0413df03173c959c4b0" protocol=ttrpc version=3 Apr 16 00:16:15.213447 systemd[1]: Started cri-containerd-62e73a0df269f7523bf79773fbffcafa0278ed4e57287c80a4439b2ae3c3737e.scope - libcontainer container 62e73a0df269f7523bf79773fbffcafa0278ed4e57287c80a4439b2ae3c3737e. 
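The typha image pull above completes in about 2.33 s, and the pod_startup_latency_tracker entry further down (logged at 00:16:15.680) reconciles the timing: podStartSLOduration is the end-to-end startup duration minus the image-pull window, so pull time is excluded from the startup SLI. A quick arithmetic check with the values copied from that entry (illustrative; timestamps reduced to seconds past 00:16:00 UTC):

```python
# Values copied from the pod_startup_latency_tracker log entry for
# calico-typha-58d8b999b5-mrcbh (seconds past 2026-04-16 00:16:00 UTC).
created    = 12.0            # podCreationTimestamp   00:16:12
first_pull = 12.80128079     # firstStartedPulling
last_pull  = 15.136603588    # lastFinishedPulling
observed   = 15.680174627    # watchObservedRunningTime

e2e = observed - created               # podStartE2EDuration: 3.680174627s
slo = e2e - (last_pull - first_pull)   # podStartSLOduration: pull window excluded
print(f"e2e={e2e:.9f}s slo={slo:.9f}s")
```

Both derived values match the durations the kubelet logged (podStartE2EDuration=3.680174627s, podStartSLOduration=1.344851829s), confirming the entry is internally consistent.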
Apr 16 00:16:15.259980 containerd[1514]: time="2026-04-16T00:16:15.259895928Z" level=info msg="StartContainer for \"62e73a0df269f7523bf79773fbffcafa0278ed4e57287c80a4439b2ae3c3737e\" returns successfully" Apr 16 00:16:15.650873 kubelet[2684]: E0416 00:16:15.650846 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.651361 kubelet[2684]: W0416 00:16:15.651341 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.651447 kubelet[2684]: E0416 00:16:15.651432 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.651779 kubelet[2684]: E0416 00:16:15.651755 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.651918 kubelet[2684]: W0416 00:16:15.651872 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.651988 kubelet[2684]: E0416 00:16:15.651977 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.652233 kubelet[2684]: E0416 00:16:15.652219 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.652319 kubelet[2684]: W0416 00:16:15.652309 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.652497 kubelet[2684]: E0416 00:16:15.652414 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.652676 kubelet[2684]: E0416 00:16:15.652665 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.652747 kubelet[2684]: W0416 00:16:15.652736 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.652915 kubelet[2684]: E0416 00:16:15.652843 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.653205 kubelet[2684]: E0416 00:16:15.653105 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.653357 kubelet[2684]: W0416 00:16:15.653290 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.653357 kubelet[2684]: E0416 00:16:15.653328 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.653710 kubelet[2684]: E0416 00:16:15.653644 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.653710 kubelet[2684]: W0416 00:16:15.653657 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.653710 kubelet[2684]: E0416 00:16:15.653667 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.653983 kubelet[2684]: E0416 00:16:15.653966 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.654155 kubelet[2684]: W0416 00:16:15.654045 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.654155 kubelet[2684]: E0416 00:16:15.654061 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.654334 kubelet[2684]: E0416 00:16:15.654322 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.654419 kubelet[2684]: W0416 00:16:15.654406 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.654482 kubelet[2684]: E0416 00:16:15.654463 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.654775 kubelet[2684]: E0416 00:16:15.654722 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.654775 kubelet[2684]: W0416 00:16:15.654734 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.654775 kubelet[2684]: E0416 00:16:15.654743 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.655121 kubelet[2684]: E0416 00:16:15.655045 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.655121 kubelet[2684]: W0416 00:16:15.655074 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.655121 kubelet[2684]: E0416 00:16:15.655084 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.655442 kubelet[2684]: E0416 00:16:15.655396 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.655442 kubelet[2684]: W0416 00:16:15.655409 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.655442 kubelet[2684]: E0416 00:16:15.655419 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.655848 kubelet[2684]: E0416 00:16:15.655726 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.655848 kubelet[2684]: W0416 00:16:15.655739 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.655848 kubelet[2684]: E0416 00:16:15.655749 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.656262 kubelet[2684]: E0416 00:16:15.656229 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.656508 kubelet[2684]: W0416 00:16:15.656370 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.656508 kubelet[2684]: E0416 00:16:15.656391 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.656748 kubelet[2684]: E0416 00:16:15.656721 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.656863 kubelet[2684]: W0416 00:16:15.656828 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.656863 kubelet[2684]: E0416 00:16:15.656850 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.657468 kubelet[2684]: E0416 00:16:15.657437 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.657855 kubelet[2684]: W0416 00:16:15.657564 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.657855 kubelet[2684]: E0416 00:16:15.657739 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.680649 kubelet[2684]: I0416 00:16:15.680469 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58d8b999b5-mrcbh" podStartSLOduration=1.344851829 podStartE2EDuration="3.680174627s" podCreationTimestamp="2026-04-16 00:16:12 +0000 UTC" firstStartedPulling="2026-04-16 00:16:12.80128079 +0000 UTC m=+23.432256180" lastFinishedPulling="2026-04-16 00:16:15.136603588 +0000 UTC m=+25.767578978" observedRunningTime="2026-04-16 00:16:15.659463292 +0000 UTC m=+26.290438682" watchObservedRunningTime="2026-04-16 00:16:15.680174627 +0000 UTC m=+26.311150057" Apr 16 00:16:15.690445 kubelet[2684]: E0416 00:16:15.690370 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.690445 kubelet[2684]: W0416 00:16:15.690397 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.690445 kubelet[2684]: E0416 00:16:15.690415 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.691398 kubelet[2684]: E0416 00:16:15.691319 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.691398 kubelet[2684]: W0416 00:16:15.691336 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.691398 kubelet[2684]: E0416 00:16:15.691350 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.691881 kubelet[2684]: E0416 00:16:15.691848 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.692066 kubelet[2684]: W0416 00:16:15.691862 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.692066 kubelet[2684]: E0416 00:16:15.691970 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.692556 kubelet[2684]: E0416 00:16:15.692516 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.692556 kubelet[2684]: W0416 00:16:15.692529 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.692556 kubelet[2684]: E0416 00:16:15.692540 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.693047 kubelet[2684]: E0416 00:16:15.693012 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.693047 kubelet[2684]: W0416 00:16:15.693025 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.693047 kubelet[2684]: E0416 00:16:15.693035 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.693566 kubelet[2684]: E0416 00:16:15.693544 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.693721 kubelet[2684]: W0416 00:16:15.693633 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.693721 kubelet[2684]: E0416 00:16:15.693650 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.694395 kubelet[2684]: E0416 00:16:15.694279 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.694556 kubelet[2684]: W0416 00:16:15.694474 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.694556 kubelet[2684]: E0416 00:16:15.694490 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.695057 kubelet[2684]: E0416 00:16:15.694993 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.695057 kubelet[2684]: W0416 00:16:15.695033 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.695057 kubelet[2684]: E0416 00:16:15.695046 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.695447 kubelet[2684]: E0416 00:16:15.695403 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.695447 kubelet[2684]: W0416 00:16:15.695413 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.695634 kubelet[2684]: E0416 00:16:15.695529 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.695968 kubelet[2684]: E0416 00:16:15.695934 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.695968 kubelet[2684]: W0416 00:16:15.695947 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.695968 kubelet[2684]: E0416 00:16:15.695957 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.696567 kubelet[2684]: E0416 00:16:15.696554 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.696887 kubelet[2684]: W0416 00:16:15.696665 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.696887 kubelet[2684]: E0416 00:16:15.696682 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.697620 kubelet[2684]: E0416 00:16:15.697494 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.697620 kubelet[2684]: W0416 00:16:15.697506 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.697620 kubelet[2684]: E0416 00:16:15.697517 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.698228 kubelet[2684]: E0416 00:16:15.698068 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.698228 kubelet[2684]: W0416 00:16:15.698080 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.698228 kubelet[2684]: E0416 00:16:15.698091 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.698534 kubelet[2684]: E0416 00:16:15.698507 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.698631 kubelet[2684]: W0416 00:16:15.698583 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.698631 kubelet[2684]: E0416 00:16:15.698598 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.698957 kubelet[2684]: E0416 00:16:15.698943 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.699076 kubelet[2684]: W0416 00:16:15.699023 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.699076 kubelet[2684]: E0416 00:16:15.699035 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.699484 kubelet[2684]: E0416 00:16:15.699365 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.699484 kubelet[2684]: W0416 00:16:15.699377 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.699484 kubelet[2684]: E0416 00:16:15.699386 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:15.699836 kubelet[2684]: E0416 00:16:15.699706 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.699836 kubelet[2684]: W0416 00:16:15.699739 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.699836 kubelet[2684]: E0416 00:16:15.699750 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:15.700382 kubelet[2684]: E0416 00:16:15.700306 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:15.700382 kubelet[2684]: W0416 00:16:15.700333 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:15.700382 kubelet[2684]: E0416 00:16:15.700345 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:16.513216 kubelet[2684]: E0416 00:16:16.511789 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0" Apr 16 00:16:16.665033 kubelet[2684]: E0416 00:16:16.664993 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:16.665955 kubelet[2684]: W0416 00:16:16.665598 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:16.665955 kubelet[2684]: E0416 00:16:16.665681 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:16:16.666443 kubelet[2684]: E0416 00:16:16.666421 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:16.666547 kubelet[2684]: W0416 00:16:16.666525 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:16.666665 kubelet[2684]: E0416 00:16:16.666642 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:16:16.667348 kubelet[2684]: E0416 00:16:16.667334 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:16:16.667569 kubelet[2684]: W0416 00:16:16.667550 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:16:16.667665 kubelet[2684]: E0416 00:16:16.667652 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 00:16:16.667984 kubelet[2684]: E0416 00:16:16.667968 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.668073 kubelet[2684]: W0416 00:16:16.668059 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.668137 kubelet[2684]: E0416 00:16:16.668125 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.668476 kubelet[2684]: E0416 00:16:16.668463 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.668590 kubelet[2684]: W0416 00:16:16.668536 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.668590 kubelet[2684]: E0416 00:16:16.668552 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.668911 kubelet[2684]: E0416 00:16:16.668844 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.668911 kubelet[2684]: W0416 00:16:16.668861 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.668911 kubelet[2684]: E0416 00:16:16.668873 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.669383 kubelet[2684]: E0416 00:16:16.669277 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.669383 kubelet[2684]: W0416 00:16:16.669326 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.669383 kubelet[2684]: E0416 00:16:16.669338 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.669682 kubelet[2684]: E0416 00:16:16.669667 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.669829 kubelet[2684]: W0416 00:16:16.669731 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.669829 kubelet[2684]: E0416 00:16:16.669746 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.670067 kubelet[2684]: E0416 00:16:16.670055 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.670144 kubelet[2684]: W0416 00:16:16.670132 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.670272 kubelet[2684]: E0416 00:16:16.670229 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.670493 kubelet[2684]: E0416 00:16:16.670481 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.670576 kubelet[2684]: W0416 00:16:16.670546 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.670651 kubelet[2684]: E0416 00:16:16.670614 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.670912 kubelet[2684]: E0416 00:16:16.670886 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.671037 kubelet[2684]: W0416 00:16:16.670976 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.671037 kubelet[2684]: E0416 00:16:16.670995 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.671368 kubelet[2684]: E0416 00:16:16.671298 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.671368 kubelet[2684]: W0416 00:16:16.671310 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.671368 kubelet[2684]: E0416 00:16:16.671321 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.671650 kubelet[2684]: E0416 00:16:16.671636 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.671773 kubelet[2684]: W0416 00:16:16.671712 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.671773 kubelet[2684]: E0416 00:16:16.671728 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.672034 kubelet[2684]: E0416 00:16:16.672021 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.672169 kubelet[2684]: W0416 00:16:16.672104 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.672169 kubelet[2684]: E0416 00:16:16.672123 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.672568 kubelet[2684]: E0416 00:16:16.672478 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.672568 kubelet[2684]: W0416 00:16:16.672491 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.672568 kubelet[2684]: E0416 00:16:16.672501 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.704380 kubelet[2684]: E0416 00:16:16.704220 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.704380 kubelet[2684]: W0416 00:16:16.704260 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.704380 kubelet[2684]: E0416 00:16:16.704290 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.705000 kubelet[2684]: E0416 00:16:16.704978 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.705171 kubelet[2684]: W0416 00:16:16.705084 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.705171 kubelet[2684]: E0416 00:16:16.705104 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.705419 kubelet[2684]: E0416 00:16:16.705399 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.705419 kubelet[2684]: W0416 00:16:16.705419 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.705649 kubelet[2684]: E0416 00:16:16.705433 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.705862 kubelet[2684]: E0416 00:16:16.705825 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.705862 kubelet[2684]: W0416 00:16:16.705843 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.705862 kubelet[2684]: E0416 00:16:16.705855 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.706285 kubelet[2684]: E0416 00:16:16.706251 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.706285 kubelet[2684]: W0416 00:16:16.706265 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.706285 kubelet[2684]: E0416 00:16:16.706276 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.706605 kubelet[2684]: E0416 00:16:16.706587 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.706605 kubelet[2684]: W0416 00:16:16.706600 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.706760 kubelet[2684]: E0416 00:16:16.706610 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.710220 kubelet[2684]: E0416 00:16:16.709539 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.710220 kubelet[2684]: W0416 00:16:16.709566 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.710220 kubelet[2684]: E0416 00:16:16.709587 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.711268 kubelet[2684]: E0416 00:16:16.711165 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.711538 kubelet[2684]: W0416 00:16:16.711392 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.711538 kubelet[2684]: E0416 00:16:16.711417 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.711844 kubelet[2684]: E0416 00:16:16.711786 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.711932 kubelet[2684]: W0416 00:16:16.711917 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.712083 kubelet[2684]: E0416 00:16:16.711982 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.712237 kubelet[2684]: E0416 00:16:16.712224 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.712366 kubelet[2684]: W0416 00:16:16.712351 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.712518 kubelet[2684]: E0416 00:16:16.712429 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.712756 kubelet[2684]: E0416 00:16:16.712730 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.712926 kubelet[2684]: W0416 00:16:16.712743 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.712926 kubelet[2684]: E0416 00:16:16.712839 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.713210 kubelet[2684]: E0416 00:16:16.713135 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.713210 kubelet[2684]: W0416 00:16:16.713150 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.713210 kubelet[2684]: E0416 00:16:16.713162 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.714642 kubelet[2684]: E0416 00:16:16.714297 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.714642 kubelet[2684]: W0416 00:16:16.714317 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.714642 kubelet[2684]: E0416 00:16:16.714336 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.715672 kubelet[2684]: E0416 00:16:16.715651 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.715775 kubelet[2684]: W0416 00:16:16.715760 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.715884 kubelet[2684]: E0416 00:16:16.715869 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.720250 kubelet[2684]: E0416 00:16:16.720063 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.720250 kubelet[2684]: W0416 00:16:16.720089 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.720250 kubelet[2684]: E0416 00:16:16.720118 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.721363 kubelet[2684]: E0416 00:16:16.721332 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.721363 kubelet[2684]: W0416 00:16:16.721354 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.721468 kubelet[2684]: E0416 00:16:16.721372 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.721954 kubelet[2684]: E0416 00:16:16.721930 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.721954 kubelet[2684]: W0416 00:16:16.721948 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.722040 kubelet[2684]: E0416 00:16:16.721961 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.722476 kubelet[2684]: E0416 00:16:16.722456 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:16:16.722476 kubelet[2684]: W0416 00:16:16.722471 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:16:16.722621 kubelet[2684]: E0416 00:16:16.722483 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:16:16.756808 containerd[1514]: time="2026-04-16T00:16:16.756658257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:16.757891 containerd[1514]: time="2026-04-16T00:16:16.757669683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Apr 16 00:16:16.758786 containerd[1514]: time="2026-04-16T00:16:16.758743551Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:16.761351 containerd[1514]: time="2026-04-16T00:16:16.761313298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:16.762734 containerd[1514]: time="2026-04-16T00:16:16.762678454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.625757657s"
Apr 16 00:16:16.762839 containerd[1514]: time="2026-04-16T00:16:16.762766376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Apr 16 00:16:16.769388 containerd[1514]: time="2026-04-16T00:16:16.768983618Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 16 00:16:16.782223 containerd[1514]: time="2026-04-16T00:16:16.781042211Z" level=info msg="Container aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5: CDI devices from CRI Config.CDIDevices: []"
Apr 16 00:16:16.792755 containerd[1514]: time="2026-04-16T00:16:16.792691474Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5\""
Apr 16 00:16:16.795289 containerd[1514]: time="2026-04-16T00:16:16.793748622Z" level=info msg="StartContainer for \"aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5\""
Apr 16 00:16:16.797860 containerd[1514]: time="2026-04-16T00:16:16.797593922Z" level=info msg="connecting to shim aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5" address="unix:///run/containerd/s/a641bf6272888a294e173fcd7304a975da94bbfa8261e1854073139969083736" protocol=ttrpc version=3
Apr 16 00:16:16.824403 systemd[1]: Started cri-containerd-aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5.scope - libcontainer container aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5.
Apr 16 00:16:16.892226 containerd[1514]: time="2026-04-16T00:16:16.891883374Z" level=info msg="StartContainer for \"aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5\" returns successfully"
Apr 16 00:16:16.908586 systemd[1]: cri-containerd-aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5.scope: Deactivated successfully.
Apr 16 00:16:16.916499 containerd[1514]: time="2026-04-16T00:16:16.916313049Z" level=info msg="received container exit event container_id:\"aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5\" id:\"aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5\" pid:3378 exited_at:{seconds:1776298576 nanos:915006975}"
Apr 16 00:16:16.942423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa5b38c6ed05505e60644be4ecaad8ce5f132ca53fa3e05c43e95fe5707304a5-rootfs.mount: Deactivated successfully.
Apr 16 00:16:17.652301 containerd[1514]: time="2026-04-16T00:16:17.651982722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 16 00:16:18.512148 kubelet[2684]: E0416 00:16:18.512046 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0"
Apr 16 00:16:20.511555 kubelet[2684]: E0416 00:16:20.511427 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0"
Apr 16 00:16:22.511579 kubelet[2684]: E0416 00:16:22.511519 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0"
Apr 16 00:16:23.762112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293959755.mount: Deactivated successfully.
Apr 16 00:16:23.790274 containerd[1514]: time="2026-04-16T00:16:23.790221187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:23.792839 containerd[1514]: time="2026-04-16T00:16:23.792653758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Apr 16 00:16:23.795872 containerd[1514]: time="2026-04-16T00:16:23.795780303Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:23.801500 containerd[1514]: time="2026-04-16T00:16:23.801435420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:23.803190 containerd[1514]: time="2026-04-16T00:16:23.803122855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.151096572s"
Apr 16 00:16:23.803190 containerd[1514]: time="2026-04-16T00:16:23.803160776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Apr 16 00:16:23.809011 containerd[1514]: time="2026-04-16T00:16:23.808970376Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 16 00:16:23.825155 containerd[1514]: time="2026-04-16T00:16:23.822533498Z" level=info msg="Container 7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2: CDI devices from CRI Config.CDIDevices: []"
Apr 16 00:16:23.827286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193833357.mount: Deactivated successfully.
Apr 16 00:16:23.836540 containerd[1514]: time="2026-04-16T00:16:23.836442867Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2\""
Apr 16 00:16:23.839585 containerd[1514]: time="2026-04-16T00:16:23.838509590Z" level=info msg="StartContainer for \"7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2\""
Apr 16 00:16:23.842214 containerd[1514]: time="2026-04-16T00:16:23.841479212Z" level=info msg="connecting to shim 7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2" address="unix:///run/containerd/s/a641bf6272888a294e173fcd7304a975da94bbfa8261e1854073139969083736" protocol=ttrpc version=3
Apr 16 00:16:23.870421 systemd[1]: Started cri-containerd-7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2.scope - libcontainer container 7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2.
Apr 16 00:16:23.936378 containerd[1514]: time="2026-04-16T00:16:23.936315702Z" level=info msg="StartContainer for \"7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2\" returns successfully"
Apr 16 00:16:24.038336 systemd[1]: cri-containerd-7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2.scope: Deactivated successfully.
Apr 16 00:16:24.043193 containerd[1514]: time="2026-04-16T00:16:24.043030758Z" level=info msg="received container exit event container_id:\"7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2\" id:\"7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2\" pid:3435 exited_at:{seconds:1776298584 nanos:42064001}"
Apr 16 00:16:24.511834 kubelet[2684]: E0416 00:16:24.511651 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0"
Apr 16 00:16:24.676469 containerd[1514]: time="2026-04-16T00:16:24.676402720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 16 00:16:24.763960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a66b42f41f9872730cd8bf213b108223d424446d72adf33644c564efb3f89f2-rootfs.mount: Deactivated successfully.
Apr 16 00:16:26.511246 kubelet[2684]: E0416 00:16:26.511171 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0"
Apr 16 00:16:28.108148 containerd[1514]: time="2026-04-16T00:16:28.107328815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:28.109345 containerd[1514]: time="2026-04-16T00:16:28.109280562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Apr 16 00:16:28.110569 containerd[1514]: time="2026-04-16T00:16:28.110516724Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:28.113016 containerd[1514]: time="2026-04-16T00:16:28.112951926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:16:28.114067 containerd[1514]: time="2026-04-16T00:16:28.114028123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.437560079s"
Apr 16 00:16:28.114067 containerd[1514]: time="2026-04-16T00:16:28.114067884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Apr 16 00:16:28.121483 containerd[1514]: time="2026-04-16T00:16:28.120527063Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 16 00:16:28.130807 containerd[1514]: time="2026-04-16T00:16:28.130749450Z" level=info msg="Container 8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c: CDI devices from CRI Config.CDIDevices: []"
Apr 16 00:16:28.146300 containerd[1514]: time="2026-04-16T00:16:28.146240976Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c\""
Apr 16 00:16:28.148493 containerd[1514]: time="2026-04-16T00:16:28.147825950Z" level=info msg="StartContainer for \"8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c\""
Apr 16 00:16:28.150216 containerd[1514]: time="2026-04-16T00:16:28.150171629Z" level=info msg="connecting to shim 8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c" address="unix:///run/containerd/s/a641bf6272888a294e173fcd7304a975da94bbfa8261e1854073139969083736" protocol=ttrpc version=3
Apr 16 00:16:28.175429 systemd[1]: Started cri-containerd-8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c.scope - libcontainer container 8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c.
Apr 16 00:16:28.250573 containerd[1514]: time="2026-04-16T00:16:28.250537235Z" level=info msg="StartContainer for \"8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c\" returns successfully"
Apr 16 00:16:28.512912 kubelet[2684]: E0416 00:16:28.511660 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9xkv" podUID="8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0"
Apr 16 00:16:28.837289 containerd[1514]: time="2026-04-16T00:16:28.837220465Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 00:16:28.839817 systemd[1]: cri-containerd-8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c.scope: Deactivated successfully.
Apr 16 00:16:28.840406 systemd[1]: cri-containerd-8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c.scope: Consumed 543ms CPU time, 192M memory peak, 171.3M written to disk.
Apr 16 00:16:28.845865 containerd[1514]: time="2026-04-16T00:16:28.845825917Z" level=info msg="received container exit event container_id:\"8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c\" id:\"8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c\" pid:3494 exited_at:{seconds:1776298588 nanos:845089332}"
Apr 16 00:16:28.871653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8753d05fb85d5101d9798decca525e0e8ac8f3f99024ac1a5ddd2426a866972c-rootfs.mount: Deactivated successfully.
Apr 16 00:16:28.916219 kubelet[2684]: I0416 00:16:28.915415 2684 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 16 00:16:28.985562 systemd[1]: Created slice kubepods-burstable-pod77775a7e_a050_479a_b2cf_e2b2e306c45b.slice - libcontainer container kubepods-burstable-pod77775a7e_a050_479a_b2cf_e2b2e306c45b.slice.
Apr 16 00:16:29.000546 systemd[1]: Created slice kubepods-burstable-pod986c9599_0625_4935_992c_bbdd4ed4b0da.slice - libcontainer container kubepods-burstable-pod986c9599_0625_4935_992c_bbdd4ed4b0da.slice.
Apr 16 00:16:29.013682 systemd[1]: Created slice kubepods-besteffort-pode7536550_604c_412d_a9f8_518eab3d01c1.slice - libcontainer container kubepods-besteffort-pode7536550_604c_412d_a9f8_518eab3d01c1.slice.
Apr 16 00:16:29.038008 systemd[1]: Created slice kubepods-besteffort-pod472bf850_ff7c_4585_ae04_90ee1c340e3a.slice - libcontainer container kubepods-besteffort-pod472bf850_ff7c_4585_ae04_90ee1c340e3a.slice.
Apr 16 00:16:29.047013 systemd[1]: Created slice kubepods-besteffort-podc14ea271_eeb8_4668_9677_80b08ccbd7da.slice - libcontainer container kubepods-besteffort-podc14ea271_eeb8_4668_9677_80b08ccbd7da.slice.
Apr 16 00:16:29.056234 systemd[1]: Created slice kubepods-besteffort-pod662ca606_a84d_4f11_b53d_469084a86bbd.slice - libcontainer container kubepods-besteffort-pod662ca606_a84d_4f11_b53d_469084a86bbd.slice.
Apr 16 00:16:29.062267 systemd[1]: Created slice kubepods-besteffort-podc5f7d55b_5915_4e61_8741_d346763d163a.slice - libcontainer container kubepods-besteffort-podc5f7d55b_5915_4e61_8741_d346763d163a.slice.
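[Annotation, not part of the log.] The slice names above encode each pod's QoS class and UID: kubelet escapes the hyphens of the pod UID as underscores to form a valid systemd unit name. A small sketch of the reverse mapping, matching the slice names in the entries above (treating a slice with no QoS segment as "guaranteed" is an assumption not exercised in this log):

```python
import re

def decode_kubepods_slice(name):
    """Map a kubepods slice unit name back to (qos_class, pod_uid).
    kubelet writes pod UIDs with '-' escaped as '_', so the reverse
    mapping strips the prefix/suffix and swaps the characters back."""
    m = re.match(r"kubepods-(?:(burstable|besteffort)-)?pod([0-9a-f_]+)\.slice$", name)
    if not m:
        return None
    # Assumption: no QoS segment in the unit name means a guaranteed-QoS pod.
    return (m.group(1) or "guaranteed"), m.group(2).replace("_", "-")

print(decode_kubepods_slice("kubepods-burstable-pod77775a7e_a050_479a_b2cf_e2b2e306c45b.slice"))
# ('burstable', '77775a7e-a050-479a-b2cf-e2b2e306c45b')
```

That UID matches the coredns-66bc5c9577-5r2gs pod referenced in the volume-attach entries that follow.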
Apr 16 00:16:29.108032 kubelet[2684]: I0416 00:16:29.107168 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pddl\" (UniqueName: \"kubernetes.io/projected/c5f7d55b-5915-4e61-8741-d346763d163a-kube-api-access-2pddl\") pod \"calico-apiserver-6d5bb45b9d-xkbr9\" (UID: \"c5f7d55b-5915-4e61-8741-d346763d163a\") " pod="calico-system/calico-apiserver-6d5bb45b9d-xkbr9" Apr 16 00:16:29.108032 kubelet[2684]: I0416 00:16:29.107227 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpv4w\" (UniqueName: \"kubernetes.io/projected/77775a7e-a050-479a-b2cf-e2b2e306c45b-kube-api-access-lpv4w\") pod \"coredns-66bc5c9577-5r2gs\" (UID: \"77775a7e-a050-479a-b2cf-e2b2e306c45b\") " pod="kube-system/coredns-66bc5c9577-5r2gs" Apr 16 00:16:29.108032 kubelet[2684]: I0416 00:16:29.107326 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5f7d55b-5915-4e61-8741-d346763d163a-calico-apiserver-certs\") pod \"calico-apiserver-6d5bb45b9d-xkbr9\" (UID: \"c5f7d55b-5915-4e61-8741-d346763d163a\") " pod="calico-system/calico-apiserver-6d5bb45b9d-xkbr9" Apr 16 00:16:29.108032 kubelet[2684]: I0416 00:16:29.107376 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c14ea271-eeb8-4668-9677-80b08ccbd7da-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-2mdz9\" (UID: \"c14ea271-eeb8-4668-9677-80b08ccbd7da\") " pod="calico-system/goldmane-cccfbd5cf-2mdz9" Apr 16 00:16:29.108032 kubelet[2684]: I0416 00:16:29.107407 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-ca-bundle\") pod \"whisker-64d5d9d696-dpxfr\" 
(UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " pod="calico-system/whisker-64d5d9d696-dpxfr" Apr 16 00:16:29.108520 kubelet[2684]: I0416 00:16:29.107438 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knv7m\" (UniqueName: \"kubernetes.io/projected/472bf850-ff7c-4585-ae04-90ee1c340e3a-kube-api-access-knv7m\") pod \"calico-apiserver-6d5bb45b9d-mfsr2\" (UID: \"472bf850-ff7c-4585-ae04-90ee1c340e3a\") " pod="calico-system/calico-apiserver-6d5bb45b9d-mfsr2" Apr 16 00:16:29.108520 kubelet[2684]: I0416 00:16:29.107479 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14ea271-eeb8-4668-9677-80b08ccbd7da-config\") pod \"goldmane-cccfbd5cf-2mdz9\" (UID: \"c14ea271-eeb8-4668-9677-80b08ccbd7da\") " pod="calico-system/goldmane-cccfbd5cf-2mdz9" Apr 16 00:16:29.108520 kubelet[2684]: I0416 00:16:29.107503 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2vl\" (UniqueName: \"kubernetes.io/projected/662ca606-a84d-4f11-b53d-469084a86bbd-kube-api-access-gq2vl\") pod \"whisker-64d5d9d696-dpxfr\" (UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " pod="calico-system/whisker-64d5d9d696-dpxfr" Apr 16 00:16:29.108520 kubelet[2684]: I0416 00:16:29.107551 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/472bf850-ff7c-4585-ae04-90ee1c340e3a-calico-apiserver-certs\") pod \"calico-apiserver-6d5bb45b9d-mfsr2\" (UID: \"472bf850-ff7c-4585-ae04-90ee1c340e3a\") " pod="calico-system/calico-apiserver-6d5bb45b9d-mfsr2" Apr 16 00:16:29.108520 kubelet[2684]: I0416 00:16:29.107576 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/77775a7e-a050-479a-b2cf-e2b2e306c45b-config-volume\") pod \"coredns-66bc5c9577-5r2gs\" (UID: \"77775a7e-a050-479a-b2cf-e2b2e306c45b\") " pod="kube-system/coredns-66bc5c9577-5r2gs" Apr 16 00:16:29.108703 kubelet[2684]: I0416 00:16:29.107610 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/986c9599-0625-4935-992c-bbdd4ed4b0da-config-volume\") pod \"coredns-66bc5c9577-krm8w\" (UID: \"986c9599-0625-4935-992c-bbdd4ed4b0da\") " pod="kube-system/coredns-66bc5c9577-krm8w" Apr 16 00:16:29.108703 kubelet[2684]: I0416 00:16:29.107634 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwkh\" (UniqueName: \"kubernetes.io/projected/986c9599-0625-4935-992c-bbdd4ed4b0da-kube-api-access-6pwkh\") pod \"coredns-66bc5c9577-krm8w\" (UID: \"986c9599-0625-4935-992c-bbdd4ed4b0da\") " pod="kube-system/coredns-66bc5c9577-krm8w" Apr 16 00:16:29.108703 kubelet[2684]: I0416 00:16:29.107653 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c14ea271-eeb8-4668-9677-80b08ccbd7da-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-2mdz9\" (UID: \"c14ea271-eeb8-4668-9677-80b08ccbd7da\") " pod="calico-system/goldmane-cccfbd5cf-2mdz9" Apr 16 00:16:29.108703 kubelet[2684]: I0416 00:16:29.107673 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncssq\" (UniqueName: \"kubernetes.io/projected/c14ea271-eeb8-4668-9677-80b08ccbd7da-kube-api-access-ncssq\") pod \"goldmane-cccfbd5cf-2mdz9\" (UID: \"c14ea271-eeb8-4668-9677-80b08ccbd7da\") " pod="calico-system/goldmane-cccfbd5cf-2mdz9" Apr 16 00:16:29.108703 kubelet[2684]: I0416 00:16:29.107702 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nginx-config\" (UniqueName: \"kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-nginx-config\") pod \"whisker-64d5d9d696-dpxfr\" (UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " pod="calico-system/whisker-64d5d9d696-dpxfr" Apr 16 00:16:29.109043 kubelet[2684]: I0416 00:16:29.107720 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-backend-key-pair\") pod \"whisker-64d5d9d696-dpxfr\" (UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " pod="calico-system/whisker-64d5d9d696-dpxfr" Apr 16 00:16:29.109043 kubelet[2684]: I0416 00:16:29.107736 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7536550-604c-412d-a9f8-518eab3d01c1-tigera-ca-bundle\") pod \"calico-kube-controllers-84cb8894f8-djtj2\" (UID: \"e7536550-604c-412d-a9f8-518eab3d01c1\") " pod="calico-system/calico-kube-controllers-84cb8894f8-djtj2" Apr 16 00:16:29.109043 kubelet[2684]: I0416 00:16:29.107769 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpw8f\" (UniqueName: \"kubernetes.io/projected/e7536550-604c-412d-a9f8-518eab3d01c1-kube-api-access-vpw8f\") pod \"calico-kube-controllers-84cb8894f8-djtj2\" (UID: \"e7536550-604c-412d-a9f8-518eab3d01c1\") " pod="calico-system/calico-kube-controllers-84cb8894f8-djtj2" Apr 16 00:16:29.296006 containerd[1514]: time="2026-04-16T00:16:29.295850476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5r2gs,Uid:77775a7e-a050-479a-b2cf-e2b2e306c45b,Namespace:kube-system,Attempt:0,}" Apr 16 00:16:29.313749 containerd[1514]: time="2026-04-16T00:16:29.313410935Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-krm8w,Uid:986c9599-0625-4935-992c-bbdd4ed4b0da,Namespace:kube-system,Attempt:0,}" Apr 16 00:16:29.336113 containerd[1514]: time="2026-04-16T00:16:29.336066843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cb8894f8-djtj2,Uid:e7536550-604c-412d-a9f8-518eab3d01c1,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:29.348834 containerd[1514]: time="2026-04-16T00:16:29.348791943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-mfsr2,Uid:472bf850-ff7c-4585-ae04-90ee1c340e3a,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:29.353338 containerd[1514]: time="2026-04-16T00:16:29.353300572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2mdz9,Uid:c14ea271-eeb8-4668-9677-80b08ccbd7da,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:29.372145 containerd[1514]: time="2026-04-16T00:16:29.371113560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d5d9d696-dpxfr,Uid:662ca606-a84d-4f11-b53d-469084a86bbd,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:29.376489 containerd[1514]: time="2026-04-16T00:16:29.376041042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-xkbr9,Uid:c5f7d55b-5915-4e61-8741-d346763d163a,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:29.502329 containerd[1514]: time="2026-04-16T00:16:29.502263129Z" level=error msg="Failed to destroy network for sandbox \"64f7e10eaa5593e9285ebd9045f3f091072c486ed6ddf62c296366e01bbb06ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.504489 containerd[1514]: time="2026-04-16T00:16:29.504307316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5r2gs,Uid:77775a7e-a050-479a-b2cf-e2b2e306c45b,Namespace:kube-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f7e10eaa5593e9285ebd9045f3f091072c486ed6ddf62c296366e01bbb06ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.504894 kubelet[2684]: E0416 00:16:29.504838 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f7e10eaa5593e9285ebd9045f3f091072c486ed6ddf62c296366e01bbb06ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.504962 kubelet[2684]: E0416 00:16:29.504916 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f7e10eaa5593e9285ebd9045f3f091072c486ed6ddf62c296366e01bbb06ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5r2gs" Apr 16 00:16:29.504962 kubelet[2684]: E0416 00:16:29.504937 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f7e10eaa5593e9285ebd9045f3f091072c486ed6ddf62c296366e01bbb06ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5r2gs" Apr 16 00:16:29.505408 kubelet[2684]: E0416 00:16:29.505002 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5r2gs_kube-system(77775a7e-a050-479a-b2cf-e2b2e306c45b)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5r2gs_kube-system(77775a7e-a050-479a-b2cf-e2b2e306c45b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64f7e10eaa5593e9285ebd9045f3f091072c486ed6ddf62c296366e01bbb06ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5r2gs" podUID="77775a7e-a050-479a-b2cf-e2b2e306c45b" Apr 16 00:16:29.529820 containerd[1514]: time="2026-04-16T00:16:29.529680234Z" level=error msg="Failed to destroy network for sandbox \"d7beaaf649dd536f2e26f50c2e6cc407eebd9f615c9d36a55608762a47aa0f69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.532435 containerd[1514]: time="2026-04-16T00:16:29.532371123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-krm8w,Uid:986c9599-0625-4935-992c-bbdd4ed4b0da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7beaaf649dd536f2e26f50c2e6cc407eebd9f615c9d36a55608762a47aa0f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.533173 kubelet[2684]: E0416 00:16:29.532627 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7beaaf649dd536f2e26f50c2e6cc407eebd9f615c9d36a55608762a47aa0f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.533173 
kubelet[2684]: E0416 00:16:29.532677 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7beaaf649dd536f2e26f50c2e6cc407eebd9f615c9d36a55608762a47aa0f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-krm8w" Apr 16 00:16:29.533173 kubelet[2684]: E0416 00:16:29.532695 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7beaaf649dd536f2e26f50c2e6cc407eebd9f615c9d36a55608762a47aa0f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-krm8w" Apr 16 00:16:29.533668 kubelet[2684]: E0416 00:16:29.532743 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-krm8w_kube-system(986c9599-0625-4935-992c-bbdd4ed4b0da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-krm8w_kube-system(986c9599-0625-4935-992c-bbdd4ed4b0da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7beaaf649dd536f2e26f50c2e6cc407eebd9f615c9d36a55608762a47aa0f69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-krm8w" podUID="986c9599-0625-4935-992c-bbdd4ed4b0da" Apr 16 00:16:29.540049 containerd[1514]: time="2026-04-16T00:16:29.539960853Z" level=error msg="Failed to destroy network for sandbox \"943f7401f09858c5335f9dd611846825fba24d94eef3845803142eb2f7a1d8b4\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.542534 containerd[1514]: time="2026-04-16T00:16:29.542482016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cb8894f8-djtj2,Uid:e7536550-604c-412d-a9f8-518eab3d01c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"943f7401f09858c5335f9dd611846825fba24d94eef3845803142eb2f7a1d8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.543035 kubelet[2684]: E0416 00:16:29.542994 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"943f7401f09858c5335f9dd611846825fba24d94eef3845803142eb2f7a1d8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.543035 kubelet[2684]: E0416 00:16:29.543050 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"943f7401f09858c5335f9dd611846825fba24d94eef3845803142eb2f7a1d8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84cb8894f8-djtj2" Apr 16 00:16:29.543035 kubelet[2684]: E0416 00:16:29.543071 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"943f7401f09858c5335f9dd611846825fba24d94eef3845803142eb2f7a1d8b4\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84cb8894f8-djtj2" Apr 16 00:16:29.543660 kubelet[2684]: E0416 00:16:29.543125 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84cb8894f8-djtj2_calico-system(e7536550-604c-412d-a9f8-518eab3d01c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84cb8894f8-djtj2_calico-system(e7536550-604c-412d-a9f8-518eab3d01c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"943f7401f09858c5335f9dd611846825fba24d94eef3845803142eb2f7a1d8b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84cb8894f8-djtj2" podUID="e7536550-604c-412d-a9f8-518eab3d01c1" Apr 16 00:16:29.572645 containerd[1514]: time="2026-04-16T00:16:29.572506607Z" level=error msg="Failed to destroy network for sandbox \"c0c23f5e1b9a95d15f67665d72cbe90c03caf493f6887f86c0e69960d415f92d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.575926 containerd[1514]: time="2026-04-16T00:16:29.575882279Z" level=error msg="Failed to destroy network for sandbox \"1498ddb3913383ccd1af29b542f5590e6a89fc796d3f31070950f27ff1e2390f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.576495 containerd[1514]: time="2026-04-16T00:16:29.576447577Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-mfsr2,Uid:472bf850-ff7c-4585-ae04-90ee1c340e3a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c23f5e1b9a95d15f67665d72cbe90c03caf493f6887f86c0e69960d415f92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.576803 kubelet[2684]: E0416 00:16:29.576694 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c23f5e1b9a95d15f67665d72cbe90c03caf493f6887f86c0e69960d415f92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.576912 kubelet[2684]: E0416 00:16:29.576895 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c23f5e1b9a95d15f67665d72cbe90c03caf493f6887f86c0e69960d415f92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d5bb45b9d-mfsr2" Apr 16 00:16:29.577002 kubelet[2684]: E0416 00:16:29.576972 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c23f5e1b9a95d15f67665d72cbe90c03caf493f6887f86c0e69960d415f92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d5bb45b9d-mfsr2" Apr 16 00:16:29.577338 kubelet[2684]: E0416 00:16:29.577092 2684 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5bb45b9d-mfsr2_calico-system(472bf850-ff7c-4585-ae04-90ee1c340e3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5bb45b9d-mfsr2_calico-system(472bf850-ff7c-4585-ae04-90ee1c340e3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0c23f5e1b9a95d15f67665d72cbe90c03caf493f6887f86c0e69960d415f92d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d5bb45b9d-mfsr2" podUID="472bf850-ff7c-4585-ae04-90ee1c340e3a" Apr 16 00:16:29.579579 containerd[1514]: time="2026-04-16T00:16:29.579527119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2mdz9,Uid:c14ea271-eeb8-4668-9677-80b08ccbd7da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1498ddb3913383ccd1af29b542f5590e6a89fc796d3f31070950f27ff1e2390f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.580225 kubelet[2684]: E0416 00:16:29.580119 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1498ddb3913383ccd1af29b542f5590e6a89fc796d3f31070950f27ff1e2390f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.580493 kubelet[2684]: E0416 00:16:29.580310 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1498ddb3913383ccd1af29b542f5590e6a89fc796d3f31070950f27ff1e2390f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2mdz9" Apr 16 00:16:29.580493 kubelet[2684]: E0416 00:16:29.580334 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1498ddb3913383ccd1af29b542f5590e6a89fc796d3f31070950f27ff1e2390f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2mdz9" Apr 16 00:16:29.580789 kubelet[2684]: E0416 00:16:29.580576 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-2mdz9_calico-system(c14ea271-eeb8-4668-9677-80b08ccbd7da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-2mdz9_calico-system(c14ea271-eeb8-4668-9677-80b08ccbd7da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1498ddb3913383ccd1af29b542f5590e6a89fc796d3f31070950f27ff1e2390f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-2mdz9" podUID="c14ea271-eeb8-4668-9677-80b08ccbd7da" Apr 16 00:16:29.583454 containerd[1514]: time="2026-04-16T00:16:29.583059076Z" level=error msg="Failed to destroy network for sandbox \"7938921f29415cf0ee2a22acbf8f43d4d7ac89ca1b644f5582b1bb1d2a109f76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 
00:16:29.588217 containerd[1514]: time="2026-04-16T00:16:29.588076961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-xkbr9,Uid:c5f7d55b-5915-4e61-8741-d346763d163a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7938921f29415cf0ee2a22acbf8f43d4d7ac89ca1b644f5582b1bb1d2a109f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.588419 kubelet[2684]: E0416 00:16:29.588337 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7938921f29415cf0ee2a22acbf8f43d4d7ac89ca1b644f5582b1bb1d2a109f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.588419 kubelet[2684]: E0416 00:16:29.588389 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7938921f29415cf0ee2a22acbf8f43d4d7ac89ca1b644f5582b1bb1d2a109f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d5bb45b9d-xkbr9" Apr 16 00:16:29.588419 kubelet[2684]: E0416 00:16:29.588409 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7938921f29415cf0ee2a22acbf8f43d4d7ac89ca1b644f5582b1bb1d2a109f76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-6d5bb45b9d-xkbr9" Apr 16 00:16:29.589141 kubelet[2684]: E0416 00:16:29.588745 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5bb45b9d-xkbr9_calico-system(c5f7d55b-5915-4e61-8741-d346763d163a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5bb45b9d-xkbr9_calico-system(c5f7d55b-5915-4e61-8741-d346763d163a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7938921f29415cf0ee2a22acbf8f43d4d7ac89ca1b644f5582b1bb1d2a109f76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d5bb45b9d-xkbr9" podUID="c5f7d55b-5915-4e61-8741-d346763d163a" Apr 16 00:16:29.606487 containerd[1514]: time="2026-04-16T00:16:29.606437487Z" level=error msg="Failed to destroy network for sandbox \"5bc2dfe2c1ba8ea2b6b8d30f5f306c258f2e064784e002404e3d92705fb79c37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.608203 containerd[1514]: time="2026-04-16T00:16:29.608121943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d5d9d696-dpxfr,Uid:662ca606-a84d-4f11-b53d-469084a86bbd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bc2dfe2c1ba8ea2b6b8d30f5f306c258f2e064784e002404e3d92705fb79c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.608645 kubelet[2684]: E0416 00:16:29.608543 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"5bc2dfe2c1ba8ea2b6b8d30f5f306c258f2e064784e002404e3d92705fb79c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:16:29.608645 kubelet[2684]: E0416 00:16:29.608620 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bc2dfe2c1ba8ea2b6b8d30f5f306c258f2e064784e002404e3d92705fb79c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64d5d9d696-dpxfr" Apr 16 00:16:29.608827 kubelet[2684]: E0416 00:16:29.608752 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bc2dfe2c1ba8ea2b6b8d30f5f306c258f2e064784e002404e3d92705fb79c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64d5d9d696-dpxfr" Apr 16 00:16:29.608942 kubelet[2684]: E0416 00:16:29.608810 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64d5d9d696-dpxfr_calico-system(662ca606-a84d-4f11-b53d-469084a86bbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64d5d9d696-dpxfr_calico-system(662ca606-a84d-4f11-b53d-469084a86bbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bc2dfe2c1ba8ea2b6b8d30f5f306c258f2e064784e002404e3d92705fb79c37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-64d5d9d696-dpxfr" podUID="662ca606-a84d-4f11-b53d-469084a86bbd" Apr 16 00:16:29.721823 containerd[1514]: time="2026-04-16T00:16:29.721387481Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 00:16:29.733390 containerd[1514]: time="2026-04-16T00:16:29.733349796Z" level=info msg="Container 97ea4508d130e3e0729f3a6ae321bdc82899232efd90218f192be5a783b19697: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:29.750652 containerd[1514]: time="2026-04-16T00:16:29.750612046Z" level=info msg="CreateContainer within sandbox \"d845b600bd9b2f11f8fb01da3a6da913158d1d90338dc440e3fadbffeeb096a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97ea4508d130e3e0729f3a6ae321bdc82899232efd90218f192be5a783b19697\"" Apr 16 00:16:29.752446 containerd[1514]: time="2026-04-16T00:16:29.752368024Z" level=info msg="StartContainer for \"97ea4508d130e3e0729f3a6ae321bdc82899232efd90218f192be5a783b19697\"" Apr 16 00:16:29.754387 containerd[1514]: time="2026-04-16T00:16:29.754302928Z" level=info msg="connecting to shim 97ea4508d130e3e0729f3a6ae321bdc82899232efd90218f192be5a783b19697" address="unix:///run/containerd/s/a641bf6272888a294e173fcd7304a975da94bbfa8261e1854073139969083736" protocol=ttrpc version=3 Apr 16 00:16:29.777428 systemd[1]: Started cri-containerd-97ea4508d130e3e0729f3a6ae321bdc82899232efd90218f192be5a783b19697.scope - libcontainer container 97ea4508d130e3e0729f3a6ae321bdc82899232efd90218f192be5a783b19697. 
Apr 16 00:16:29.849137 containerd[1514]: time="2026-04-16T00:16:29.849091137Z" level=info msg="StartContainer for \"97ea4508d130e3e0729f3a6ae321bdc82899232efd90218f192be5a783b19697\" returns successfully" Apr 16 00:16:30.121036 kubelet[2684]: I0416 00:16:30.120945 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-backend-key-pair\") pod \"662ca606-a84d-4f11-b53d-469084a86bbd\" (UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " Apr 16 00:16:30.121036 kubelet[2684]: I0416 00:16:30.121008 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-ca-bundle\") pod \"662ca606-a84d-4f11-b53d-469084a86bbd\" (UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " Apr 16 00:16:30.121036 kubelet[2684]: I0416 00:16:30.121039 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq2vl\" (UniqueName: \"kubernetes.io/projected/662ca606-a84d-4f11-b53d-469084a86bbd-kube-api-access-gq2vl\") pod \"662ca606-a84d-4f11-b53d-469084a86bbd\" (UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " Apr 16 00:16:30.121354 kubelet[2684]: I0416 00:16:30.121059 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-nginx-config\") pod \"662ca606-a84d-4f11-b53d-469084a86bbd\" (UID: \"662ca606-a84d-4f11-b53d-469084a86bbd\") " Apr 16 00:16:30.121693 kubelet[2684]: I0416 00:16:30.121499 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "662ca606-a84d-4f11-b53d-469084a86bbd" (UID: "662ca606-a84d-4f11-b53d-469084a86bbd"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 00:16:30.122483 kubelet[2684]: I0416 00:16:30.122406 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "662ca606-a84d-4f11-b53d-469084a86bbd" (UID: "662ca606-a84d-4f11-b53d-469084a86bbd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 00:16:30.126434 kubelet[2684]: I0416 00:16:30.126379 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662ca606-a84d-4f11-b53d-469084a86bbd-kube-api-access-gq2vl" (OuterVolumeSpecName: "kube-api-access-gq2vl") pod "662ca606-a84d-4f11-b53d-469084a86bbd" (UID: "662ca606-a84d-4f11-b53d-469084a86bbd"). InnerVolumeSpecName "kube-api-access-gq2vl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 00:16:30.128128 kubelet[2684]: I0416 00:16:30.128087 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "662ca606-a84d-4f11-b53d-469084a86bbd" (UID: "662ca606-a84d-4f11-b53d-469084a86bbd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 00:16:30.221421 kubelet[2684]: I0416 00:16:30.221346 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-0840528111\" DevicePath \"\"" Apr 16 00:16:30.221421 kubelet[2684]: I0416 00:16:30.221378 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-whisker-ca-bundle\") on node \"ci-4459-2-4-n-0840528111\" DevicePath \"\"" Apr 16 00:16:30.221421 kubelet[2684]: I0416 00:16:30.221389 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gq2vl\" (UniqueName: \"kubernetes.io/projected/662ca606-a84d-4f11-b53d-469084a86bbd-kube-api-access-gq2vl\") on node \"ci-4459-2-4-n-0840528111\" DevicePath \"\"" Apr 16 00:16:30.221421 kubelet[2684]: I0416 00:16:30.221397 2684 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/662ca606-a84d-4f11-b53d-469084a86bbd-nginx-config\") on node \"ci-4459-2-4-n-0840528111\" DevicePath \"\"" Apr 16 00:16:30.227298 systemd[1]: var-lib-kubelet-pods-662ca606\x2da84d\x2d4f11\x2db53d\x2d469084a86bbd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 00:16:30.521771 systemd[1]: Created slice kubepods-besteffort-pod8072e2d2_48e1_4c7f_bbc3_6ef041c65bc0.slice - libcontainer container kubepods-besteffort-pod8072e2d2_48e1_4c7f_bbc3_6ef041c65bc0.slice. 
Apr 16 00:16:30.528268 containerd[1514]: time="2026-04-16T00:16:30.528208958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q9xkv,Uid:8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:30.707976 systemd-networkd[1414]: calid642de0f0e6: Link UP Apr 16 00:16:30.709632 systemd-networkd[1414]: calid642de0f0e6: Gained carrier Apr 16 00:16:30.747386 containerd[1514]: 2026-04-16 00:16:30.554 [ERROR][3770] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:16:30.747386 containerd[1514]: 2026-04-16 00:16:30.584 [INFO][3770] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0 csi-node-driver- calico-system 8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0 730 0 2026-04-16 00:16:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 csi-node-driver-q9xkv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid642de0f0e6 [] [] }} ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-" Apr 16 00:16:30.747386 containerd[1514]: 2026-04-16 00:16:30.585 [INFO][3770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" 
WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" Apr 16 00:16:30.747386 containerd[1514]: 2026-04-16 00:16:30.635 [INFO][3782] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" HandleID="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Workload="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.648 [INFO][3782] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" HandleID="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Workload="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001215d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-0840528111", "pod":"csi-node-driver-q9xkv", "timestamp":"2026-04-16 00:16:30.6353934 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000166dc0)} Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.648 [INFO][3782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.648 [INFO][3782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.648 [INFO][3782] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.652 [INFO][3782] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.660 [INFO][3782] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.666 [INFO][3782] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.669 [INFO][3782] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747651 containerd[1514]: 2026-04-16 00:16:30.673 [INFO][3782] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747848 containerd[1514]: 2026-04-16 00:16:30.673 [INFO][3782] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747848 containerd[1514]: 2026-04-16 00:16:30.675 [INFO][3782] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d Apr 16 00:16:30.747848 containerd[1514]: 2026-04-16 00:16:30.682 [INFO][3782] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747848 containerd[1514]: 2026-04-16 00:16:30.690 [INFO][3782] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.36.1/26] block=192.168.36.0/26 handle="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747848 containerd[1514]: 2026-04-16 00:16:30.690 [INFO][3782] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.1/26] handle="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:30.747848 containerd[1514]: 2026-04-16 00:16:30.691 [INFO][3782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:16:30.747848 containerd[1514]: 2026-04-16 00:16:30.691 [INFO][3782] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.1/26] IPv6=[] ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" HandleID="k8s-pod-network.f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Workload="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" Apr 16 00:16:30.747976 containerd[1514]: 2026-04-16 00:16:30.696 [INFO][3770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"csi-node-driver-q9xkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid642de0f0e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:30.748501 containerd[1514]: 2026-04-16 00:16:30.698 [INFO][3770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.1/32] ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" Apr 16 00:16:30.748501 containerd[1514]: 2026-04-16 00:16:30.698 [INFO][3770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid642de0f0e6 ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" Apr 16 00:16:30.748501 containerd[1514]: 2026-04-16 00:16:30.711 [INFO][3770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" Apr 16 00:16:30.748596 containerd[1514]: 2026-04-16 
00:16:30.711 [INFO][3770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d", Pod:"csi-node-driver-q9xkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid642de0f0e6", MAC:"ea:8b:a6:ac:00:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:30.748654 containerd[1514]: 2026-04-16 00:16:30.732 
[INFO][3770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" Namespace="calico-system" Pod="csi-node-driver-q9xkv" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-csi--node--driver--q9xkv-eth0" Apr 16 00:16:30.758740 systemd[1]: Removed slice kubepods-besteffort-pod662ca606_a84d_4f11_b53d_469084a86bbd.slice - libcontainer container kubepods-besteffort-pod662ca606_a84d_4f11_b53d_469084a86bbd.slice. Apr 16 00:16:30.782031 kubelet[2684]: I0416 00:16:30.780708 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rbf4f" podStartSLOduration=3.535099873 podStartE2EDuration="18.780690265s" podCreationTimestamp="2026-04-16 00:16:12 +0000 UTC" firstStartedPulling="2026-04-16 00:16:12.869859699 +0000 UTC m=+23.500835089" lastFinishedPulling="2026-04-16 00:16:28.115450091 +0000 UTC m=+38.746425481" observedRunningTime="2026-04-16 00:16:30.77679642 +0000 UTC m=+41.407771850" watchObservedRunningTime="2026-04-16 00:16:30.780690265 +0000 UTC m=+41.411665615" Apr 16 00:16:30.798038 containerd[1514]: time="2026-04-16T00:16:30.797494605Z" level=info msg="connecting to shim f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d" address="unix:///run/containerd/s/824f12069865d4eb8360b6729cc135d9706703491493b775cbfb365e49287b57" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:30.842453 systemd[1]: Started cri-containerd-f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d.scope - libcontainer container f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d. Apr 16 00:16:30.893644 systemd[1]: Created slice kubepods-besteffort-poddaefb7a6_42ff_4ea3_bd24_c65ae218b599.slice - libcontainer container kubepods-besteffort-poddaefb7a6_42ff_4ea3_bd24_c65ae218b599.slice. 
Apr 16 00:16:30.928861 kubelet[2684]: I0416 00:16:30.928561 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzql7\" (UniqueName: \"kubernetes.io/projected/daefb7a6-42ff-4ea3-bd24-c65ae218b599-kube-api-access-mzql7\") pod \"whisker-59cbd788b5-2qnvc\" (UID: \"daefb7a6-42ff-4ea3-bd24-c65ae218b599\") " pod="calico-system/whisker-59cbd788b5-2qnvc" Apr 16 00:16:30.929001 kubelet[2684]: I0416 00:16:30.928970 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/daefb7a6-42ff-4ea3-bd24-c65ae218b599-nginx-config\") pod \"whisker-59cbd788b5-2qnvc\" (UID: \"daefb7a6-42ff-4ea3-bd24-c65ae218b599\") " pod="calico-system/whisker-59cbd788b5-2qnvc" Apr 16 00:16:30.930460 kubelet[2684]: I0416 00:16:30.930411 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daefb7a6-42ff-4ea3-bd24-c65ae218b599-whisker-ca-bundle\") pod \"whisker-59cbd788b5-2qnvc\" (UID: \"daefb7a6-42ff-4ea3-bd24-c65ae218b599\") " pod="calico-system/whisker-59cbd788b5-2qnvc" Apr 16 00:16:30.930557 kubelet[2684]: I0416 00:16:30.930493 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/daefb7a6-42ff-4ea3-bd24-c65ae218b599-whisker-backend-key-pair\") pod \"whisker-59cbd788b5-2qnvc\" (UID: \"daefb7a6-42ff-4ea3-bd24-c65ae218b599\") " pod="calico-system/whisker-59cbd788b5-2qnvc" Apr 16 00:16:30.942142 containerd[1514]: time="2026-04-16T00:16:30.942101408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q9xkv,Uid:8072e2d2-48e1-4c7f-bbc3-6ef041c65bc0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d\"" Apr 16 00:16:30.945635 
containerd[1514]: time="2026-04-16T00:16:30.945584439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 00:16:31.201634 containerd[1514]: time="2026-04-16T00:16:31.201488601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cbd788b5-2qnvc,Uid:daefb7a6-42ff-4ea3-bd24-c65ae218b599,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:31.365985 systemd-networkd[1414]: cali2f3927540f0: Link UP Apr 16 00:16:31.367482 systemd-networkd[1414]: cali2f3927540f0: Gained carrier Apr 16 00:16:31.394199 containerd[1514]: 2026-04-16 00:16:31.235 [ERROR][3866] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:16:31.394199 containerd[1514]: 2026-04-16 00:16:31.251 [INFO][3866] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0 whisker-59cbd788b5- calico-system daefb7a6-42ff-4ea3-bd24-c65ae218b599 922 0 2026-04-16 00:16:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59cbd788b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 whisker-59cbd788b5-2qnvc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2f3927540f0 [] [] }} ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-" Apr 16 00:16:31.394199 containerd[1514]: 2026-04-16 00:16:31.252 [INFO][3866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" 
WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" Apr 16 00:16:31.394199 containerd[1514]: 2026-04-16 00:16:31.280 [INFO][3879] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" HandleID="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Workload="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.296 [INFO][3879] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" HandleID="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Workload="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002734e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-0840528111", "pod":"whisker-59cbd788b5-2qnvc", "timestamp":"2026-04-16 00:16:31.280641074 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030cf20)} Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.296 [INFO][3879] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.296 [INFO][3879] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.296 [INFO][3879] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.305 [INFO][3879] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.317 [INFO][3879] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.325 [INFO][3879] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.328 [INFO][3879] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394460 containerd[1514]: 2026-04-16 00:16:31.334 [INFO][3879] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394644 containerd[1514]: 2026-04-16 00:16:31.334 [INFO][3879] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394644 containerd[1514]: 2026-04-16 00:16:31.341 [INFO][3879] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b Apr 16 00:16:31.394644 containerd[1514]: 2026-04-16 00:16:31.347 [INFO][3879] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394644 containerd[1514]: 2026-04-16 00:16:31.356 [INFO][3879] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.36.2/26] block=192.168.36.0/26 handle="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394644 containerd[1514]: 2026-04-16 00:16:31.356 [INFO][3879] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.2/26] handle="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:31.394644 containerd[1514]: 2026-04-16 00:16:31.356 [INFO][3879] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:16:31.394644 containerd[1514]: 2026-04-16 00:16:31.356 [INFO][3879] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.2/26] IPv6=[] ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" HandleID="k8s-pod-network.d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Workload="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" Apr 16 00:16:31.395462 containerd[1514]: 2026-04-16 00:16:31.359 [INFO][3866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0", GenerateName:"whisker-59cbd788b5-", Namespace:"calico-system", SelfLink:"", UID:"daefb7a6-42ff-4ea3-bd24-c65ae218b599", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cbd788b5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"whisker-59cbd788b5-2qnvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f3927540f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:31.395462 containerd[1514]: 2026-04-16 00:16:31.359 [INFO][3866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.2/32] ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" Apr 16 00:16:31.395583 containerd[1514]: 2026-04-16 00:16:31.359 [INFO][3866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f3927540f0 ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" Apr 16 00:16:31.395583 containerd[1514]: 2026-04-16 00:16:31.367 [INFO][3866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" Apr 16 00:16:31.395629 containerd[1514]: 2026-04-16 00:16:31.370 [INFO][3866] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0", GenerateName:"whisker-59cbd788b5-", Namespace:"calico-system", SelfLink:"", UID:"daefb7a6-42ff-4ea3-bd24-c65ae218b599", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cbd788b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b", Pod:"whisker-59cbd788b5-2qnvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f3927540f0", MAC:"96:85:94:8e:d3:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:31.396112 containerd[1514]: 2026-04-16 00:16:31.390 [INFO][3866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" Namespace="calico-system" Pod="whisker-59cbd788b5-2qnvc" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-whisker--59cbd788b5--2qnvc-eth0" Apr 16 00:16:31.429787 containerd[1514]: time="2026-04-16T00:16:31.429689730Z" level=info msg="connecting to shim d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b" address="unix:///run/containerd/s/20622e02b2114e2ea620baddec6e3938cc26d5ae36add13345812e8750d41590" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:31.480490 systemd[1]: Started cri-containerd-d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b.scope - libcontainer container d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b. Apr 16 00:16:31.526261 kubelet[2684]: I0416 00:16:31.525641 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662ca606-a84d-4f11-b53d-469084a86bbd" path="/var/lib/kubelet/pods/662ca606-a84d-4f11-b53d-469084a86bbd/volumes" Apr 16 00:16:31.599665 containerd[1514]: time="2026-04-16T00:16:31.599612517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cbd788b5-2qnvc,Uid:daefb7a6-42ff-4ea3-bd24-c65ae218b599,Namespace:calico-system,Attempt:0,} returns sandbox id \"d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b\"" Apr 16 00:16:32.298137 systemd-networkd[1414]: vxlan.calico: Link UP Apr 16 00:16:32.298146 systemd-networkd[1414]: vxlan.calico: Gained carrier Apr 16 00:16:32.416463 systemd-networkd[1414]: calid642de0f0e6: Gained IPv6LL Apr 16 00:16:32.544440 systemd-networkd[1414]: cali2f3927540f0: Gained IPv6LL Apr 16 00:16:33.190404 containerd[1514]: time="2026-04-16T00:16:33.190291126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:33.191869 containerd[1514]: time="2026-04-16T00:16:33.191765409Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 16 00:16:33.192771 containerd[1514]: time="2026-04-16T00:16:33.192736438Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:33.195661 containerd[1514]: time="2026-04-16T00:16:33.195596323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:33.197098 containerd[1514]: time="2026-04-16T00:16:33.197016004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.251390643s" Apr 16 00:16:33.197098 containerd[1514]: time="2026-04-16T00:16:33.197089567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 16 00:16:33.198908 containerd[1514]: time="2026-04-16T00:16:33.198270642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 00:16:33.201756 containerd[1514]: time="2026-04-16T00:16:33.201722864Z" level=info msg="CreateContainer within sandbox \"f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 00:16:33.213574 containerd[1514]: time="2026-04-16T00:16:33.213491452Z" level=info msg="Container 79d2c333a0b1e2a8f8f00f0b9b475b69efebeac0e224f5b16461dada83b170a0: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:33.219893 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2418313339.mount: Deactivated successfully. Apr 16 00:16:33.246399 containerd[1514]: time="2026-04-16T00:16:33.246315462Z" level=info msg="CreateContainer within sandbox \"f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"79d2c333a0b1e2a8f8f00f0b9b475b69efebeac0e224f5b16461dada83b170a0\"" Apr 16 00:16:33.247834 containerd[1514]: time="2026-04-16T00:16:33.247782666Z" level=info msg="StartContainer for \"79d2c333a0b1e2a8f8f00f0b9b475b69efebeac0e224f5b16461dada83b170a0\"" Apr 16 00:16:33.250087 containerd[1514]: time="2026-04-16T00:16:33.249989491Z" level=info msg="connecting to shim 79d2c333a0b1e2a8f8f00f0b9b475b69efebeac0e224f5b16461dada83b170a0" address="unix:///run/containerd/s/824f12069865d4eb8360b6729cc135d9706703491493b775cbfb365e49287b57" protocol=ttrpc version=3 Apr 16 00:16:33.276427 systemd[1]: Started cri-containerd-79d2c333a0b1e2a8f8f00f0b9b475b69efebeac0e224f5b16461dada83b170a0.scope - libcontainer container 79d2c333a0b1e2a8f8f00f0b9b475b69efebeac0e224f5b16461dada83b170a0. 
Apr 16 00:16:33.348686 containerd[1514]: time="2026-04-16T00:16:33.348554686Z" level=info msg="StartContainer for \"79d2c333a0b1e2a8f8f00f0b9b475b69efebeac0e224f5b16461dada83b170a0\" returns successfully" Apr 16 00:16:33.761158 systemd-networkd[1414]: vxlan.calico: Gained IPv6LL Apr 16 00:16:34.910500 containerd[1514]: time="2026-04-16T00:16:34.910430957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:34.912249 containerd[1514]: time="2026-04-16T00:16:34.912156206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 16 00:16:34.913200 containerd[1514]: time="2026-04-16T00:16:34.913106474Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:34.916607 containerd[1514]: time="2026-04-16T00:16:34.916549693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:34.917682 containerd[1514]: time="2026-04-16T00:16:34.917650924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.719346642s" Apr 16 00:16:34.917761 containerd[1514]: time="2026-04-16T00:16:34.917683485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 16 00:16:34.919728 containerd[1514]: 
time="2026-04-16T00:16:34.919654262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 00:16:34.923002 containerd[1514]: time="2026-04-16T00:16:34.922943757Z" level=info msg="CreateContainer within sandbox \"d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 00:16:34.932688 containerd[1514]: time="2026-04-16T00:16:34.932356308Z" level=info msg="Container 07755441aef31a907a94ccaaca58f5f4535f16308e1085a3c5c8960811875ed0: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:34.943806 containerd[1514]: time="2026-04-16T00:16:34.943761276Z" level=info msg="CreateContainer within sandbox \"d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"07755441aef31a907a94ccaaca58f5f4535f16308e1085a3c5c8960811875ed0\"" Apr 16 00:16:34.944854 containerd[1514]: time="2026-04-16T00:16:34.944829627Z" level=info msg="StartContainer for \"07755441aef31a907a94ccaaca58f5f4535f16308e1085a3c5c8960811875ed0\"" Apr 16 00:16:34.946469 containerd[1514]: time="2026-04-16T00:16:34.946433553Z" level=info msg="connecting to shim 07755441aef31a907a94ccaaca58f5f4535f16308e1085a3c5c8960811875ed0" address="unix:///run/containerd/s/20622e02b2114e2ea620baddec6e3938cc26d5ae36add13345812e8750d41590" protocol=ttrpc version=3 Apr 16 00:16:34.970952 systemd[1]: Started cri-containerd-07755441aef31a907a94ccaaca58f5f4535f16308e1085a3c5c8960811875ed0.scope - libcontainer container 07755441aef31a907a94ccaaca58f5f4535f16308e1085a3c5c8960811875ed0. 
Apr 16 00:16:35.015510 containerd[1514]: time="2026-04-16T00:16:35.015476209Z" level=info msg="StartContainer for \"07755441aef31a907a94ccaaca58f5f4535f16308e1085a3c5c8960811875ed0\" returns successfully" Apr 16 00:16:36.655713 containerd[1514]: time="2026-04-16T00:16:36.655640510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:36.656943 containerd[1514]: time="2026-04-16T00:16:36.656906705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 16 00:16:36.657570 containerd[1514]: time="2026-04-16T00:16:36.657517521Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:36.660035 containerd[1514]: time="2026-04-16T00:16:36.660005389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:36.660817 containerd[1514]: time="2026-04-16T00:16:36.660784530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.741097227s" Apr 16 00:16:36.660872 containerd[1514]: time="2026-04-16T00:16:36.660817531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 16 00:16:36.663265 containerd[1514]: 
time="2026-04-16T00:16:36.662420935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 00:16:36.667919 containerd[1514]: time="2026-04-16T00:16:36.667691879Z" level=info msg="CreateContainer within sandbox \"f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 00:16:36.678384 containerd[1514]: time="2026-04-16T00:16:36.678337929Z" level=info msg="Container 64f56927a832dda8f2b7ebc50f99430525c1fee3c588c319d54a8fd0201ed7c1: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:36.694605 containerd[1514]: time="2026-04-16T00:16:36.694540331Z" level=info msg="CreateContainer within sandbox \"f0cb82b292096a0154b2cbdd873a79cf5e414a581f2c286f2ad03a4323bbc97d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"64f56927a832dda8f2b7ebc50f99430525c1fee3c588c319d54a8fd0201ed7c1\"" Apr 16 00:16:36.695590 containerd[1514]: time="2026-04-16T00:16:36.695565199Z" level=info msg="StartContainer for \"64f56927a832dda8f2b7ebc50f99430525c1fee3c588c319d54a8fd0201ed7c1\"" Apr 16 00:16:36.700333 containerd[1514]: time="2026-04-16T00:16:36.700006600Z" level=info msg="connecting to shim 64f56927a832dda8f2b7ebc50f99430525c1fee3c588c319d54a8fd0201ed7c1" address="unix:///run/containerd/s/824f12069865d4eb8360b6729cc135d9706703491493b775cbfb365e49287b57" protocol=ttrpc version=3 Apr 16 00:16:36.726426 systemd[1]: Started cri-containerd-64f56927a832dda8f2b7ebc50f99430525c1fee3c588c319d54a8fd0201ed7c1.scope - libcontainer container 64f56927a832dda8f2b7ebc50f99430525c1fee3c588c319d54a8fd0201ed7c1. 
Apr 16 00:16:36.795678 containerd[1514]: time="2026-04-16T00:16:36.795565006Z" level=info msg="StartContainer for \"64f56927a832dda8f2b7ebc50f99430525c1fee3c588c319d54a8fd0201ed7c1\" returns successfully" Apr 16 00:16:37.621746 kubelet[2684]: I0416 00:16:37.621682 2684 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 00:16:37.621746 kubelet[2684]: I0416 00:16:37.621720 2684 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 00:16:37.813142 kubelet[2684]: I0416 00:16:37.812561 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q9xkv" podStartSLOduration=20.095721777 podStartE2EDuration="25.812542315s" podCreationTimestamp="2026-04-16 00:16:12 +0000 UTC" firstStartedPulling="2026-04-16 00:16:30.945109264 +0000 UTC m=+41.576084654" lastFinishedPulling="2026-04-16 00:16:36.661929842 +0000 UTC m=+47.292905192" observedRunningTime="2026-04-16 00:16:37.811400965 +0000 UTC m=+48.442376355" watchObservedRunningTime="2026-04-16 00:16:37.812542315 +0000 UTC m=+48.443517705" Apr 16 00:16:38.771020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953829054.mount: Deactivated successfully. 
Apr 16 00:16:38.792312 containerd[1514]: time="2026-04-16T00:16:38.791459354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:38.792751 containerd[1514]: time="2026-04-16T00:16:38.792721906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 16 00:16:38.794776 containerd[1514]: time="2026-04-16T00:16:38.794741439Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:38.797090 containerd[1514]: time="2026-04-16T00:16:38.797023338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:38.798123 containerd[1514]: time="2026-04-16T00:16:38.798083445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.135587628s" Apr 16 00:16:38.798253 containerd[1514]: time="2026-04-16T00:16:38.798123326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 16 00:16:38.804744 containerd[1514]: time="2026-04-16T00:16:38.804698976Z" level=info msg="CreateContainer within sandbox \"d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 00:16:38.815814 
containerd[1514]: time="2026-04-16T00:16:38.814406267Z" level=info msg="Container 85cab44c94411ffea01fee2a9dab886c748d7684fbbaf31c660ae02a06b0d7cc: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:38.826304 containerd[1514]: time="2026-04-16T00:16:38.826246773Z" level=info msg="CreateContainer within sandbox \"d46bb9beb86801859884a5434efe15e11065bb793d5301837e086ba94347958b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"85cab44c94411ffea01fee2a9dab886c748d7684fbbaf31c660ae02a06b0d7cc\"" Apr 16 00:16:38.827213 containerd[1514]: time="2026-04-16T00:16:38.827063154Z" level=info msg="StartContainer for \"85cab44c94411ffea01fee2a9dab886c748d7684fbbaf31c660ae02a06b0d7cc\"" Apr 16 00:16:38.828701 containerd[1514]: time="2026-04-16T00:16:38.828635595Z" level=info msg="connecting to shim 85cab44c94411ffea01fee2a9dab886c748d7684fbbaf31c660ae02a06b0d7cc" address="unix:///run/containerd/s/20622e02b2114e2ea620baddec6e3938cc26d5ae36add13345812e8750d41590" protocol=ttrpc version=3 Apr 16 00:16:38.854564 systemd[1]: Started cri-containerd-85cab44c94411ffea01fee2a9dab886c748d7684fbbaf31c660ae02a06b0d7cc.scope - libcontainer container 85cab44c94411ffea01fee2a9dab886c748d7684fbbaf31c660ae02a06b0d7cc. 
Apr 16 00:16:38.900456 containerd[1514]: time="2026-04-16T00:16:38.900397130Z" level=info msg="StartContainer for \"85cab44c94411ffea01fee2a9dab886c748d7684fbbaf31c660ae02a06b0d7cc\" returns successfully" Apr 16 00:16:42.515122 containerd[1514]: time="2026-04-16T00:16:42.514993078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5r2gs,Uid:77775a7e-a050-479a-b2cf-e2b2e306c45b,Namespace:kube-system,Attempt:0,}" Apr 16 00:16:42.517135 containerd[1514]: time="2026-04-16T00:16:42.517044845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2mdz9,Uid:c14ea271-eeb8-4668-9677-80b08ccbd7da,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:42.705867 systemd-networkd[1414]: cali25002a09b44: Link UP Apr 16 00:16:42.707261 systemd-networkd[1414]: cali25002a09b44: Gained carrier Apr 16 00:16:42.727032 kubelet[2684]: I0416 00:16:42.726730 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59cbd788b5-2qnvc" podStartSLOduration=5.529964696 podStartE2EDuration="12.726713325s" podCreationTimestamp="2026-04-16 00:16:30 +0000 UTC" firstStartedPulling="2026-04-16 00:16:31.602218079 +0000 UTC m=+42.233193469" lastFinishedPulling="2026-04-16 00:16:38.798966708 +0000 UTC m=+49.429942098" observedRunningTime="2026-04-16 00:16:39.804432038 +0000 UTC m=+50.435407468" watchObservedRunningTime="2026-04-16 00:16:42.726713325 +0000 UTC m=+53.357688715" Apr 16 00:16:42.733366 containerd[1514]: 2026-04-16 00:16:42.582 [INFO][4345] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0 goldmane-cccfbd5cf- calico-system c14ea271-eeb8-4668-9677-80b08ccbd7da 870 0 2026-04-16 00:16:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 goldmane-cccfbd5cf-2mdz9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali25002a09b44 [] [] }} ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-" Apr 16 00:16:42.733366 containerd[1514]: 2026-04-16 00:16:42.583 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" Apr 16 00:16:42.733366 containerd[1514]: 2026-04-16 00:16:42.628 [INFO][4366] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" HandleID="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Workload="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.645 [INFO][4366] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" HandleID="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Workload="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-0840528111", "pod":"goldmane-cccfbd5cf-2mdz9", "timestamp":"2026-04-16 00:16:42.628459078 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002834a0)} Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.645 [INFO][4366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.645 [INFO][4366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.645 [INFO][4366] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.651 [INFO][4366] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.659 [INFO][4366] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.667 [INFO][4366] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.670 [INFO][4366] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733598 containerd[1514]: 2026-04-16 00:16:42.672 [INFO][4366] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733935 containerd[1514]: 2026-04-16 00:16:42.672 [INFO][4366] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733935 containerd[1514]: 2026-04-16 00:16:42.675 [INFO][4366] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5 Apr 16 00:16:42.733935 containerd[1514]: 2026-04-16 00:16:42.681 [INFO][4366] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733935 containerd[1514]: 2026-04-16 00:16:42.692 [INFO][4366] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.3/26] block=192.168.36.0/26 handle="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733935 containerd[1514]: 2026-04-16 00:16:42.693 [INFO][4366] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.3/26] handle="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.733935 containerd[1514]: 2026-04-16 00:16:42.693 [INFO][4366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 00:16:42.733935 containerd[1514]: 2026-04-16 00:16:42.694 [INFO][4366] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.3/26] IPv6=[] ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" HandleID="k8s-pod-network.d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Workload="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" Apr 16 00:16:42.734308 containerd[1514]: 2026-04-16 00:16:42.698 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c14ea271-eeb8-4668-9677-80b08ccbd7da", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"goldmane-cccfbd5cf-2mdz9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali25002a09b44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:42.734308 containerd[1514]: 2026-04-16 00:16:42.698 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.3/32] ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" Apr 16 00:16:42.734732 containerd[1514]: 2026-04-16 00:16:42.698 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25002a09b44 ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" Apr 16 00:16:42.734732 containerd[1514]: 2026-04-16 00:16:42.708 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" Apr 16 00:16:42.734781 containerd[1514]: 2026-04-16 00:16:42.710 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", 
UID:"c14ea271-eeb8-4668-9677-80b08ccbd7da", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5", Pod:"goldmane-cccfbd5cf-2mdz9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali25002a09b44", MAC:"fa:de:13:e6:c4:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:42.734834 containerd[1514]: 2026-04-16 00:16:42.727 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2mdz9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-goldmane--cccfbd5cf--2mdz9-eth0" Apr 16 00:16:42.768162 containerd[1514]: time="2026-04-16T00:16:42.767906523Z" level=info msg="connecting to shim d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5" address="unix:///run/containerd/s/e67953a5fd2f7381fa95afdfbd1da8df2951088ebfc11b065d07498890254020" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:42.809392 systemd[1]: Started 
cri-containerd-d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5.scope - libcontainer container d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5. Apr 16 00:16:42.824644 systemd-networkd[1414]: califf6c0069fb5: Link UP Apr 16 00:16:42.825491 systemd-networkd[1414]: califf6c0069fb5: Gained carrier Apr 16 00:16:42.850575 containerd[1514]: 2026-04-16 00:16:42.578 [INFO][4337] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0 coredns-66bc5c9577- kube-system 77775a7e-a050-479a-b2cf-e2b2e306c45b 858 0 2026-04-16 00:15:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 coredns-66bc5c9577-5r2gs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf6c0069fb5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-" Apr 16 00:16:42.850575 containerd[1514]: 2026-04-16 00:16:42.579 [INFO][4337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" Apr 16 00:16:42.850575 containerd[1514]: 2026-04-16 00:16:42.640 [INFO][4361] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" 
HandleID="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Workload="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.658 [INFO][4361] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" HandleID="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Workload="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-0840528111", "pod":"coredns-66bc5c9577-5r2gs", "timestamp":"2026-04-16 00:16:42.640466958 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000254dc0)} Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.658 [INFO][4361] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.694 [INFO][4361] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.694 [INFO][4361] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.754 [INFO][4361] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.768 [INFO][4361] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.783 [INFO][4361] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.788 [INFO][4361] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850780 containerd[1514]: 2026-04-16 00:16:42.791 [INFO][4361] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850968 containerd[1514]: 2026-04-16 00:16:42.792 [INFO][4361] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850968 containerd[1514]: 2026-04-16 00:16:42.795 [INFO][4361] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82 Apr 16 00:16:42.850968 containerd[1514]: 2026-04-16 00:16:42.806 [INFO][4361] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850968 containerd[1514]: 2026-04-16 00:16:42.816 [INFO][4361] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.36.4/26] block=192.168.36.0/26 handle="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850968 containerd[1514]: 2026-04-16 00:16:42.816 [INFO][4361] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.4/26] handle="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:42.850968 containerd[1514]: 2026-04-16 00:16:42.816 [INFO][4361] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:16:42.850968 containerd[1514]: 2026-04-16 00:16:42.816 [INFO][4361] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.4/26] IPv6=[] ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" HandleID="k8s-pod-network.1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Workload="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" Apr 16 00:16:42.851099 containerd[1514]: 2026-04-16 00:16:42.820 [INFO][4337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"77775a7e-a050-479a-b2cf-e2b2e306c45b", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 15, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"coredns-66bc5c9577-5r2gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6c0069fb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:42.851099 containerd[1514]: 2026-04-16 00:16:42.820 [INFO][4337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.4/32] ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" Apr 16 00:16:42.851099 containerd[1514]: 2026-04-16 00:16:42.820 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf6c0069fb5 
ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" Apr 16 00:16:42.851099 containerd[1514]: 2026-04-16 00:16:42.826 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" Apr 16 00:16:42.851099 containerd[1514]: 2026-04-16 00:16:42.827 [INFO][4337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"77775a7e-a050-479a-b2cf-e2b2e306c45b", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 15, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", 
ContainerID:"1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82", Pod:"coredns-66bc5c9577-5r2gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6c0069fb5", MAC:"a2:c9:4b:11:07:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:42.852360 containerd[1514]: 2026-04-16 00:16:42.848 [INFO][4337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" Namespace="kube-system" Pod="coredns-66bc5c9577-5r2gs" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--5r2gs-eth0" Apr 16 00:16:42.893790 containerd[1514]: time="2026-04-16T00:16:42.893738812Z" level=info msg="connecting to shim 1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82" address="unix:///run/containerd/s/8d95cb1716f9ac6e50fd598afd74f650cf450e902ccb159e273891e0dee0bb10" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:42.912361 containerd[1514]: time="2026-04-16T00:16:42.911960956Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2mdz9,Uid:c14ea271-eeb8-4668-9677-80b08ccbd7da,Namespace:calico-system,Attempt:0,} returns sandbox id \"d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5\"" Apr 16 00:16:42.915689 containerd[1514]: time="2026-04-16T00:16:42.915657882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 00:16:42.932517 systemd[1]: Started cri-containerd-1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82.scope - libcontainer container 1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82. Apr 16 00:16:42.986759 containerd[1514]: time="2026-04-16T00:16:42.986713855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5r2gs,Uid:77775a7e-a050-479a-b2cf-e2b2e306c45b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82\"" Apr 16 00:16:42.994712 containerd[1514]: time="2026-04-16T00:16:42.994669201Z" level=info msg="CreateContainer within sandbox \"1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 00:16:43.008242 containerd[1514]: time="2026-04-16T00:16:43.007546697Z" level=info msg="Container a06c4bb897c65d9c88094a96d9bcfd3cc2ed7418ced3959a8847c650da7036bd: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:43.020476 containerd[1514]: time="2026-04-16T00:16:43.019669531Z" level=info msg="CreateContainer within sandbox \"1c4e5d6fb01a903d3ab42e63d96672454226e232679a1f41ed8d080c56f00f82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a06c4bb897c65d9c88094a96d9bcfd3cc2ed7418ced3959a8847c650da7036bd\"" Apr 16 00:16:43.022311 containerd[1514]: time="2026-04-16T00:16:43.022277231Z" level=info msg="StartContainer for \"a06c4bb897c65d9c88094a96d9bcfd3cc2ed7418ced3959a8847c650da7036bd\"" Apr 16 00:16:43.024751 containerd[1514]: 
time="2026-04-16T00:16:43.024719966Z" level=info msg="connecting to shim a06c4bb897c65d9c88094a96d9bcfd3cc2ed7418ced3959a8847c650da7036bd" address="unix:///run/containerd/s/8d95cb1716f9ac6e50fd598afd74f650cf450e902ccb159e273891e0dee0bb10" protocol=ttrpc version=3 Apr 16 00:16:43.044396 systemd[1]: Started cri-containerd-a06c4bb897c65d9c88094a96d9bcfd3cc2ed7418ced3959a8847c650da7036bd.scope - libcontainer container a06c4bb897c65d9c88094a96d9bcfd3cc2ed7418ced3959a8847c650da7036bd. Apr 16 00:16:43.080626 containerd[1514]: time="2026-04-16T00:16:43.080575833Z" level=info msg="StartContainer for \"a06c4bb897c65d9c88094a96d9bcfd3cc2ed7418ced3959a8847c650da7036bd\" returns successfully" Apr 16 00:16:43.844984 kubelet[2684]: I0416 00:16:43.844836 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5r2gs" podStartSLOduration=47.844816683 podStartE2EDuration="47.844816683s" podCreationTimestamp="2026-04-16 00:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:16:43.822540138 +0000 UTC m=+54.453515528" watchObservedRunningTime="2026-04-16 00:16:43.844816683 +0000 UTC m=+54.475792073" Apr 16 00:16:44.387650 systemd-networkd[1414]: cali25002a09b44: Gained IPv6LL Apr 16 00:16:44.515799 containerd[1514]: time="2026-04-16T00:16:44.515632838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-krm8w,Uid:986c9599-0625-4935-992c-bbdd4ed4b0da,Namespace:kube-system,Attempt:0,}" Apr 16 00:16:44.518213 containerd[1514]: time="2026-04-16T00:16:44.517955650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-mfsr2,Uid:472bf850-ff7c-4585-ae04-90ee1c340e3a,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:44.518834 containerd[1514]: time="2026-04-16T00:16:44.518794988Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-xkbr9,Uid:c5f7d55b-5915-4e61-8741-d346763d163a,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:44.524544 containerd[1514]: time="2026-04-16T00:16:44.523812659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cb8894f8-djtj2,Uid:e7536550-604c-412d-a9f8-518eab3d01c1,Namespace:calico-system,Attempt:0,}" Apr 16 00:16:44.643983 systemd-networkd[1414]: califf6c0069fb5: Gained IPv6LL Apr 16 00:16:44.901767 systemd-networkd[1414]: cali53c1ca815ef: Link UP Apr 16 00:16:44.902001 systemd-networkd[1414]: cali53c1ca815ef: Gained carrier Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.693 [INFO][4573] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0 calico-apiserver-6d5bb45b9d- calico-system 472bf850-ff7c-4585-ae04-90ee1c340e3a 868 0 2026-04-16 00:16:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5bb45b9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 calico-apiserver-6d5bb45b9d-mfsr2 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali53c1ca815ef [] [] }} ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.693 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" 
WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.782 [INFO][4619] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" HandleID="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Workload="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.825 [INFO][4619] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" HandleID="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Workload="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-0840528111", "pod":"calico-apiserver-6d5bb45b9d-mfsr2", "timestamp":"2026-04-16 00:16:44.782823663 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002b6dc0)} Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.825 [INFO][4619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.826 [INFO][4619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.826 [INFO][4619] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.834 [INFO][4619] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.843 [INFO][4619] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.853 [INFO][4619] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.858 [INFO][4619] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.863 [INFO][4619] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.863 [INFO][4619] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.869 [INFO][4619] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53 Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.878 [INFO][4619] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.891 [INFO][4619] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.36.5/26] block=192.168.36.0/26 handle="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.892 [INFO][4619] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.5/26] handle="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.892 [INFO][4619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:16:44.924852 containerd[1514]: 2026-04-16 00:16:44.892 [INFO][4619] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.5/26] IPv6=[] ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" HandleID="k8s-pod-network.77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Workload="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" Apr 16 00:16:44.925602 containerd[1514]: 2026-04-16 00:16:44.897 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0", GenerateName:"calico-apiserver-6d5bb45b9d-", Namespace:"calico-system", SelfLink:"", UID:"472bf850-ff7c-4585-ae04-90ee1c340e3a", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6d5bb45b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"calico-apiserver-6d5bb45b9d-mfsr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali53c1ca815ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:44.925602 containerd[1514]: 2026-04-16 00:16:44.897 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.5/32] ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" Apr 16 00:16:44.925602 containerd[1514]: 2026-04-16 00:16:44.897 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53c1ca815ef ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" Apr 16 00:16:44.925602 containerd[1514]: 2026-04-16 00:16:44.900 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" 
WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" Apr 16 00:16:44.925602 containerd[1514]: 2026-04-16 00:16:44.901 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0", GenerateName:"calico-apiserver-6d5bb45b9d-", Namespace:"calico-system", SelfLink:"", UID:"472bf850-ff7c-4585-ae04-90ee1c340e3a", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5bb45b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53", Pod:"calico-apiserver-6d5bb45b9d-mfsr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali53c1ca815ef", MAC:"e2:8c:60:fc:1c:6e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:44.925602 containerd[1514]: 2026-04-16 00:16:44.920 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-mfsr2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--mfsr2-eth0" Apr 16 00:16:44.988455 containerd[1514]: time="2026-04-16T00:16:44.988237443Z" level=info msg="connecting to shim 77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53" address="unix:///run/containerd/s/105f7205ac0d407d3d5c18bb4b0076ada915d85a0bbd670a1bc251bddf69ab8c" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:45.009307 systemd-networkd[1414]: cali9432ac81cbf: Link UP Apr 16 00:16:45.010136 systemd-networkd[1414]: cali9432ac81cbf: Gained carrier Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.666 [INFO][4579] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0 calico-kube-controllers-84cb8894f8- calico-system e7536550-604c-412d-a9f8-518eab3d01c1 865 0 2026-04-16 00:16:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84cb8894f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 calico-kube-controllers-84cb8894f8-djtj2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9432ac81cbf [] [] }} ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" 
WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.669 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.805 [INFO][4617] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" HandleID="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Workload="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.833 [INFO][4617] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" HandleID="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Workload="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037be70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-0840528111", "pod":"calico-kube-controllers-84cb8894f8-djtj2", "timestamp":"2026-04-16 00:16:44.805268279 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000fc160)} Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.833 [INFO][4617] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.892 [INFO][4617] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.892 [INFO][4617] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.935 [INFO][4617] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.945 [INFO][4617] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.959 [INFO][4617] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.963 [INFO][4617] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.968 [INFO][4617] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.969 [INFO][4617] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.972 [INFO][4617] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1 Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.979 [INFO][4617] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" 
host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.993 [INFO][4617] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.6/26] block=192.168.36.0/26 handle="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.994 [INFO][4617] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.6/26] handle="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.994 [INFO][4617] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:16:45.047189 containerd[1514]: 2026-04-16 00:16:44.994 [INFO][4617] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.6/26] IPv6=[] ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" HandleID="k8s-pod-network.e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Workload="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" Apr 16 00:16:45.048570 containerd[1514]: 2026-04-16 00:16:44.998 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0", GenerateName:"calico-kube-controllers-84cb8894f8-", Namespace:"calico-system", SelfLink:"", UID:"e7536550-604c-412d-a9f8-518eab3d01c1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 12, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84cb8894f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"calico-kube-controllers-84cb8894f8-djtj2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9432ac81cbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:45.048570 containerd[1514]: 2026-04-16 00:16:44.998 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.6/32] ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" Apr 16 00:16:45.048570 containerd[1514]: 2026-04-16 00:16:44.998 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9432ac81cbf ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" Apr 16 00:16:45.048570 containerd[1514]: 2026-04-16 00:16:45.014 [INFO][4579] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" Apr 16 00:16:45.048570 containerd[1514]: 2026-04-16 00:16:45.016 [INFO][4579] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0", GenerateName:"calico-kube-controllers-84cb8894f8-", Namespace:"calico-system", SelfLink:"", UID:"e7536550-604c-412d-a9f8-518eab3d01c1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84cb8894f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1", Pod:"calico-kube-controllers-84cb8894f8-djtj2", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9432ac81cbf", MAC:"5e:3a:17:63:ad:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:45.048570 containerd[1514]: 2026-04-16 00:16:45.039 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" Namespace="calico-system" Pod="calico-kube-controllers-84cb8894f8-djtj2" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--kube--controllers--84cb8894f8--djtj2-eth0" Apr 16 00:16:45.077444 systemd[1]: Started cri-containerd-77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53.scope - libcontainer container 77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53. 
Apr 16 00:16:45.129045 containerd[1514]: time="2026-04-16T00:16:45.128916320Z" level=info msg="connecting to shim e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1" address="unix:///run/containerd/s/d80bc5a98599743e21dadfe12b8aa007db463b02c3282e7953c3d91bcbe57c86" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:45.141478 systemd-networkd[1414]: calia55c23a3318: Link UP Apr 16 00:16:45.146453 systemd-networkd[1414]: calia55c23a3318: Gained carrier Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:44.701 [INFO][4594] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0 calico-apiserver-6d5bb45b9d- calico-system c5f7d55b-5915-4e61-8741-d346763d163a 866 0 2026-04-16 00:16:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5bb45b9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 calico-apiserver-6d5bb45b9d-xkbr9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia55c23a3318 [] [] }} ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:44.702 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:44.797 [INFO][4627] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" HandleID="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Workload="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:44.832 [INFO][4627] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" HandleID="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Workload="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038fcb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-0840528111", "pod":"calico-apiserver-6d5bb45b9d-xkbr9", "timestamp":"2026-04-16 00:16:44.797303583 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400018c9a0)} Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:44.832 [INFO][4627] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:44.994 [INFO][4627] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:44.994 [INFO][4627] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.037 [INFO][4627] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.050 [INFO][4627] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.065 [INFO][4627] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.071 [INFO][4627] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.083 [INFO][4627] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.083 [INFO][4627] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.088 [INFO][4627] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.099 [INFO][4627] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.115 [INFO][4627] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.36.7/26] block=192.168.36.0/26 handle="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.116 [INFO][4627] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.7/26] handle="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.116 [INFO][4627] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:16:45.192128 containerd[1514]: 2026-04-16 00:16:45.116 [INFO][4627] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.7/26] IPv6=[] ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" HandleID="k8s-pod-network.fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Workload="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" Apr 16 00:16:45.192686 containerd[1514]: 2026-04-16 00:16:45.130 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0", GenerateName:"calico-apiserver-6d5bb45b9d-", Namespace:"calico-system", SelfLink:"", UID:"c5f7d55b-5915-4e61-8741-d346763d163a", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6d5bb45b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"calico-apiserver-6d5bb45b9d-xkbr9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia55c23a3318", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:45.192686 containerd[1514]: 2026-04-16 00:16:45.130 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.7/32] ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" Apr 16 00:16:45.192686 containerd[1514]: 2026-04-16 00:16:45.130 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia55c23a3318 ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" Apr 16 00:16:45.192686 containerd[1514]: 2026-04-16 00:16:45.156 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" 
WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" Apr 16 00:16:45.192686 containerd[1514]: 2026-04-16 00:16:45.157 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0", GenerateName:"calico-apiserver-6d5bb45b9d-", Namespace:"calico-system", SelfLink:"", UID:"c5f7d55b-5915-4e61-8741-d346763d163a", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 16, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5bb45b9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d", Pod:"calico-apiserver-6d5bb45b9d-xkbr9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia55c23a3318", MAC:"c2:d0:5c:7d:68:72", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:45.192686 containerd[1514]: 2026-04-16 00:16:45.182 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" Namespace="calico-system" Pod="calico-apiserver-6d5bb45b9d-xkbr9" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-calico--apiserver--6d5bb45b9d--xkbr9-eth0" Apr 16 00:16:45.203518 systemd[1]: Started cri-containerd-e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1.scope - libcontainer container e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1. Apr 16 00:16:45.246877 systemd-networkd[1414]: cali7fb0f0cb1f2: Link UP Apr 16 00:16:45.248357 systemd-networkd[1414]: cali7fb0f0cb1f2: Gained carrier Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:44.708 [INFO][4561] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0 coredns-66bc5c9577- kube-system 986c9599-0625-4935-992c-bbdd4ed4b0da 869 0 2026-04-16 00:15:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-0840528111 coredns-66bc5c9577-krm8w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7fb0f0cb1f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:44.710 [INFO][4561] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:44.822 [INFO][4634] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" HandleID="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Workload="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:44.837 [INFO][4634] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" HandleID="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Workload="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000375f70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-0840528111", "pod":"coredns-66bc5c9577-krm8w", "timestamp":"2026-04-16 00:16:44.822129532 +0000 UTC"}, Hostname:"ci-4459-2-4-n-0840528111", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004c51e0)} Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:44.837 [INFO][4634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.116 [INFO][4634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.117 [INFO][4634] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-0840528111' Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.145 [INFO][4634] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.171 [INFO][4634] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.192 [INFO][4634] ipam/ipam.go 526: Trying affinity for 192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.196 [INFO][4634] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.201 [INFO][4634] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.0/26 host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.201 [INFO][4634] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.0/26 handle="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.204 [INFO][4634] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87 Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.213 [INFO][4634] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.0/26 handle="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.229 [INFO][4634] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.36.8/26] block=192.168.36.0/26 handle="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.229 [INFO][4634] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.8/26] handle="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" host="ci-4459-2-4-n-0840528111" Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.229 [INFO][4634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:16:45.278557 containerd[1514]: 2026-04-16 00:16:45.229 [INFO][4634] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.8/26] IPv6=[] ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" HandleID="k8s-pod-network.5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Workload="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" Apr 16 00:16:45.279098 containerd[1514]: 2026-04-16 00:16:45.240 [INFO][4561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"986c9599-0625-4935-992c-bbdd4ed4b0da", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 15, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", ContainerID:"", Pod:"coredns-66bc5c9577-krm8w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7fb0f0cb1f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:45.279098 containerd[1514]: 2026-04-16 00:16:45.240 [INFO][4561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.8/32] ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" Apr 16 00:16:45.279098 containerd[1514]: 2026-04-16 00:16:45.240 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fb0f0cb1f2 
ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" Apr 16 00:16:45.279098 containerd[1514]: 2026-04-16 00:16:45.248 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" Apr 16 00:16:45.279098 containerd[1514]: 2026-04-16 00:16:45.249 [INFO][4561] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"986c9599-0625-4935-992c-bbdd4ed4b0da", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 15, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-0840528111", 
ContainerID:"5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87", Pod:"coredns-66bc5c9577-krm8w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7fb0f0cb1f2", MAC:"62:2b:72:c6:b1:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:16:45.280808 containerd[1514]: 2026-04-16 00:16:45.274 [INFO][4561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" Namespace="kube-system" Pod="coredns-66bc5c9577-krm8w" WorkloadEndpoint="ci--4459--2--4--n--0840528111-k8s-coredns--66bc5c9577--krm8w-eth0" Apr 16 00:16:45.293512 containerd[1514]: time="2026-04-16T00:16:45.292771090Z" level=info msg="connecting to shim fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d" address="unix:///run/containerd/s/371852e922705799ef1e81f6a3dbc633d0c4832587db2e456e2f93803971407d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:45.301910 containerd[1514]: time="2026-04-16T00:16:45.301758684Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-mfsr2,Uid:472bf850-ff7c-4585-ae04-90ee1c340e3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53\"" Apr 16 00:16:45.351801 containerd[1514]: time="2026-04-16T00:16:45.351729400Z" level=info msg="connecting to shim 5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87" address="unix:///run/containerd/s/7c2b441902d83f57e4c5f13d7fb8376a2280a20336276fb34bb5fe60e5355b76" namespace=k8s.io protocol=ttrpc version=3 Apr 16 00:16:45.352391 systemd[1]: Started cri-containerd-fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d.scope - libcontainer container fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d. Apr 16 00:16:45.410043 containerd[1514]: time="2026-04-16T00:16:45.408854710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84cb8894f8-djtj2,Uid:e7536550-604c-412d-a9f8-518eab3d01c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1\"" Apr 16 00:16:45.434800 systemd[1]: Started cri-containerd-5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87.scope - libcontainer container 5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87. 
Apr 16 00:16:45.496320 containerd[1514]: time="2026-04-16T00:16:45.495318453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bb45b9d-xkbr9,Uid:c5f7d55b-5915-4e61-8741-d346763d163a,Namespace:calico-system,Attempt:0,} returns sandbox id \"fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d\"" Apr 16 00:16:45.506507 containerd[1514]: time="2026-04-16T00:16:45.506422172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-krm8w,Uid:986c9599-0625-4935-992c-bbdd4ed4b0da,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87\"" Apr 16 00:16:45.516372 containerd[1514]: time="2026-04-16T00:16:45.516131621Z" level=info msg="CreateContainer within sandbox \"5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 00:16:45.550098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466116075.mount: Deactivated successfully. 
Apr 16 00:16:45.551835 containerd[1514]: time="2026-04-16T00:16:45.551095415Z" level=info msg="Container e5fd33c593167cb2a1382da9a1ded86c0c594f499f95b8c9eab9d0756dc65793: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:45.565098 containerd[1514]: time="2026-04-16T00:16:45.565050315Z" level=info msg="CreateContainer within sandbox \"5b71a57fcbc233cf750f05de7b01983abcd96c682a19e4c5063813da4b603e87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5fd33c593167cb2a1382da9a1ded86c0c594f499f95b8c9eab9d0756dc65793\"" Apr 16 00:16:45.567651 containerd[1514]: time="2026-04-16T00:16:45.567617690Z" level=info msg="StartContainer for \"e5fd33c593167cb2a1382da9a1ded86c0c594f499f95b8c9eab9d0756dc65793\"" Apr 16 00:16:45.570777 containerd[1514]: time="2026-04-16T00:16:45.570732438Z" level=info msg="connecting to shim e5fd33c593167cb2a1382da9a1ded86c0c594f499f95b8c9eab9d0756dc65793" address="unix:///run/containerd/s/7c2b441902d83f57e4c5f13d7fb8376a2280a20336276fb34bb5fe60e5355b76" protocol=ttrpc version=3 Apr 16 00:16:45.608523 systemd[1]: Started cri-containerd-e5fd33c593167cb2a1382da9a1ded86c0c594f499f95b8c9eab9d0756dc65793.scope - libcontainer container e5fd33c593167cb2a1382da9a1ded86c0c594f499f95b8c9eab9d0756dc65793. Apr 16 00:16:45.659408 containerd[1514]: time="2026-04-16T00:16:45.659319666Z" level=info msg="StartContainer for \"e5fd33c593167cb2a1382da9a1ded86c0c594f499f95b8c9eab9d0756dc65793\" returns successfully" Apr 16 00:16:45.793562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753424760.mount: Deactivated successfully. 
Apr 16 00:16:45.843090 kubelet[2684]: I0416 00:16:45.842942 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-krm8w" podStartSLOduration=49.842925621 podStartE2EDuration="49.842925621s" podCreationTimestamp="2026-04-16 00:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:16:45.842532612 +0000 UTC m=+56.473508002" watchObservedRunningTime="2026-04-16 00:16:45.842925621 +0000 UTC m=+56.473901011" Apr 16 00:16:46.226617 containerd[1514]: time="2026-04-16T00:16:46.226201595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:46.227783 containerd[1514]: time="2026-04-16T00:16:46.227744908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 16 00:16:46.228910 containerd[1514]: time="2026-04-16T00:16:46.228880051Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:46.232420 containerd[1514]: time="2026-04-16T00:16:46.232347084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:46.233199 containerd[1514]: time="2026-04-16T00:16:46.233019018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.316440955s" Apr 16 00:16:46.233199 
containerd[1514]: time="2026-04-16T00:16:46.233054539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 16 00:16:46.234962 containerd[1514]: time="2026-04-16T00:16:46.234445248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 00:16:46.238727 containerd[1514]: time="2026-04-16T00:16:46.238686537Z" level=info msg="CreateContainer within sandbox \"d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 00:16:46.247240 containerd[1514]: time="2026-04-16T00:16:46.246702546Z" level=info msg="Container 08cf2176005cf2ff43b46ff9872dfdd9b630e8fceaeeeda67e618c6e20012e84: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:46.260172 containerd[1514]: time="2026-04-16T00:16:46.260116867Z" level=info msg="CreateContainer within sandbox \"d53a6df9d662c0755a6e984f34529d0b5eddb4fa107f1428b1c2b90324a710f5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"08cf2176005cf2ff43b46ff9872dfdd9b630e8fceaeeeda67e618c6e20012e84\"" Apr 16 00:16:46.261126 containerd[1514]: time="2026-04-16T00:16:46.261086088Z" level=info msg="StartContainer for \"08cf2176005cf2ff43b46ff9872dfdd9b630e8fceaeeeda67e618c6e20012e84\"" Apr 16 00:16:46.262774 containerd[1514]: time="2026-04-16T00:16:46.262657801Z" level=info msg="connecting to shim 08cf2176005cf2ff43b46ff9872dfdd9b630e8fceaeeeda67e618c6e20012e84" address="unix:///run/containerd/s/e67953a5fd2f7381fa95afdfbd1da8df2951088ebfc11b065d07498890254020" protocol=ttrpc version=3 Apr 16 00:16:46.288415 systemd[1]: Started cri-containerd-08cf2176005cf2ff43b46ff9872dfdd9b630e8fceaeeeda67e618c6e20012e84.scope - libcontainer container 08cf2176005cf2ff43b46ff9872dfdd9b630e8fceaeeeda67e618c6e20012e84. 
Apr 16 00:16:46.341211 containerd[1514]: time="2026-04-16T00:16:46.341123649Z" level=info msg="StartContainer for \"08cf2176005cf2ff43b46ff9872dfdd9b630e8fceaeeeda67e618c6e20012e84\" returns successfully" Apr 16 00:16:46.433863 systemd-networkd[1414]: cali53c1ca815ef: Gained IPv6LL Apr 16 00:16:46.624979 systemd-networkd[1414]: cali9432ac81cbf: Gained IPv6LL Apr 16 00:16:46.753674 systemd-networkd[1414]: calia55c23a3318: Gained IPv6LL Apr 16 00:16:46.855986 kubelet[2684]: I0416 00:16:46.855896 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-2mdz9" podStartSLOduration=32.536217231 podStartE2EDuration="35.855872178s" podCreationTimestamp="2026-04-16 00:16:11 +0000 UTC" firstStartedPulling="2026-04-16 00:16:42.914486495 +0000 UTC m=+53.545461885" lastFinishedPulling="2026-04-16 00:16:46.234141442 +0000 UTC m=+56.865116832" observedRunningTime="2026-04-16 00:16:46.854131502 +0000 UTC m=+57.485106852" watchObservedRunningTime="2026-04-16 00:16:46.855872178 +0000 UTC m=+57.486847728" Apr 16 00:16:46.944461 systemd-networkd[1414]: cali7fb0f0cb1f2: Gained IPv6LL Apr 16 00:16:49.910166 containerd[1514]: time="2026-04-16T00:16:49.910006492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:49.911232 containerd[1514]: time="2026-04-16T00:16:49.911196395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 16 00:16:49.912075 containerd[1514]: time="2026-04-16T00:16:49.911996611Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:49.917155 containerd[1514]: time="2026-04-16T00:16:49.917060349Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:49.917949 containerd[1514]: time="2026-04-16T00:16:49.917836804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.683353155s" Apr 16 00:16:49.917949 containerd[1514]: time="2026-04-16T00:16:49.917877445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 00:16:49.921228 containerd[1514]: time="2026-04-16T00:16:49.921131308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 00:16:49.923879 containerd[1514]: time="2026-04-16T00:16:49.923810881Z" level=info msg="CreateContainer within sandbox \"77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 00:16:49.937063 containerd[1514]: time="2026-04-16T00:16:49.936016278Z" level=info msg="Container b102ba61f8216505645e7cd316005b1b3e5c170a73df80ee0c01d23dfbf013f2: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:49.947049 containerd[1514]: time="2026-04-16T00:16:49.946924091Z" level=info msg="CreateContainer within sandbox \"77aea444e191c28a92e5c7aafb64cb16ec33a519c7d6d35dad4171634b620d53\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b102ba61f8216505645e7cd316005b1b3e5c170a73df80ee0c01d23dfbf013f2\"" Apr 16 00:16:49.948794 containerd[1514]: time="2026-04-16T00:16:49.948754886Z" level=info msg="StartContainer for 
\"b102ba61f8216505645e7cd316005b1b3e5c170a73df80ee0c01d23dfbf013f2\"" Apr 16 00:16:49.950433 containerd[1514]: time="2026-04-16T00:16:49.950376558Z" level=info msg="connecting to shim b102ba61f8216505645e7cd316005b1b3e5c170a73df80ee0c01d23dfbf013f2" address="unix:///run/containerd/s/105f7205ac0d407d3d5c18bb4b0076ada915d85a0bbd670a1bc251bddf69ab8c" protocol=ttrpc version=3 Apr 16 00:16:49.977460 systemd[1]: Started cri-containerd-b102ba61f8216505645e7cd316005b1b3e5c170a73df80ee0c01d23dfbf013f2.scope - libcontainer container b102ba61f8216505645e7cd316005b1b3e5c170a73df80ee0c01d23dfbf013f2. Apr 16 00:16:50.031505 containerd[1514]: time="2026-04-16T00:16:50.031460323Z" level=info msg="StartContainer for \"b102ba61f8216505645e7cd316005b1b3e5c170a73df80ee0c01d23dfbf013f2\" returns successfully" Apr 16 00:16:51.893583 kubelet[2684]: I0416 00:16:51.892532 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d5bb45b9d-mfsr2" podStartSLOduration=37.27823831 podStartE2EDuration="41.892513468s" podCreationTimestamp="2026-04-16 00:16:10 +0000 UTC" firstStartedPulling="2026-04-16 00:16:45.305888172 +0000 UTC m=+55.936863522" lastFinishedPulling="2026-04-16 00:16:49.92016329 +0000 UTC m=+60.551138680" observedRunningTime="2026-04-16 00:16:50.870456582 +0000 UTC m=+61.501431972" watchObservedRunningTime="2026-04-16 00:16:51.892513468 +0000 UTC m=+62.523488858" Apr 16 00:16:53.896923 containerd[1514]: time="2026-04-16T00:16:53.896108172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:53.898734 containerd[1514]: time="2026-04-16T00:16:53.898701257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 16 00:16:53.899819 containerd[1514]: time="2026-04-16T00:16:53.899787197Z" level=info msg="ImageCreate event 
name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:53.904085 containerd[1514]: time="2026-04-16T00:16:53.904046232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:53.905720 containerd[1514]: time="2026-04-16T00:16:53.905674581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.984370148s" Apr 16 00:16:53.905925 containerd[1514]: time="2026-04-16T00:16:53.905707621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 16 00:16:53.908027 containerd[1514]: time="2026-04-16T00:16:53.907590774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 00:16:53.928690 containerd[1514]: time="2026-04-16T00:16:53.928650426Z" level=info msg="CreateContainer within sandbox \"e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 00:16:53.939148 containerd[1514]: time="2026-04-16T00:16:53.938396678Z" level=info msg="Container ecd62b19397e4d96f3da92df48bc306224225fa7f80d7c90ca258ec0d2a149b8: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:53.951980 containerd[1514]: time="2026-04-16T00:16:53.951924957Z" level=info msg="CreateContainer within sandbox 
\"e58e62164f68b4e4fe55140447f2ff13e6f91543495c04d8a8425a47db5aecc1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ecd62b19397e4d96f3da92df48bc306224225fa7f80d7c90ca258ec0d2a149b8\"" Apr 16 00:16:53.953318 containerd[1514]: time="2026-04-16T00:16:53.952991296Z" level=info msg="StartContainer for \"ecd62b19397e4d96f3da92df48bc306224225fa7f80d7c90ca258ec0d2a149b8\"" Apr 16 00:16:53.954764 containerd[1514]: time="2026-04-16T00:16:53.954730327Z" level=info msg="connecting to shim ecd62b19397e4d96f3da92df48bc306224225fa7f80d7c90ca258ec0d2a149b8" address="unix:///run/containerd/s/d80bc5a98599743e21dadfe12b8aa007db463b02c3282e7953c3d91bcbe57c86" protocol=ttrpc version=3 Apr 16 00:16:53.979424 systemd[1]: Started cri-containerd-ecd62b19397e4d96f3da92df48bc306224225fa7f80d7c90ca258ec0d2a149b8.scope - libcontainer container ecd62b19397e4d96f3da92df48bc306224225fa7f80d7c90ca258ec0d2a149b8. Apr 16 00:16:54.035056 containerd[1514]: time="2026-04-16T00:16:54.035013530Z" level=info msg="StartContainer for \"ecd62b19397e4d96f3da92df48bc306224225fa7f80d7c90ca258ec0d2a149b8\" returns successfully" Apr 16 00:16:54.310046 containerd[1514]: time="2026-04-16T00:16:54.309988429Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:16:54.311210 containerd[1514]: time="2026-04-16T00:16:54.311026727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 00:16:54.313337 containerd[1514]: time="2026-04-16T00:16:54.313295126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 
405.60271ms" Apr 16 00:16:54.313337 containerd[1514]: time="2026-04-16T00:16:54.313335407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 00:16:54.318560 containerd[1514]: time="2026-04-16T00:16:54.318460095Z" level=info msg="CreateContainer within sandbox \"fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 00:16:54.327441 containerd[1514]: time="2026-04-16T00:16:54.327236727Z" level=info msg="Container 7a2bca75b5652298963493d2bcb82ca2fba6b1317e58bdf92e5ea89e356a83a1: CDI devices from CRI Config.CDIDevices: []" Apr 16 00:16:54.337390 containerd[1514]: time="2026-04-16T00:16:54.337313620Z" level=info msg="CreateContainer within sandbox \"fffe706683d9f30247c4ea9abcf2c26c9cb8586a0d413d85d9937fde2948c29d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a2bca75b5652298963493d2bcb82ca2fba6b1317e58bdf92e5ea89e356a83a1\"" Apr 16 00:16:54.340891 containerd[1514]: time="2026-04-16T00:16:54.339437897Z" level=info msg="StartContainer for \"7a2bca75b5652298963493d2bcb82ca2fba6b1317e58bdf92e5ea89e356a83a1\"" Apr 16 00:16:54.342102 containerd[1514]: time="2026-04-16T00:16:54.341882379Z" level=info msg="connecting to shim 7a2bca75b5652298963493d2bcb82ca2fba6b1317e58bdf92e5ea89e356a83a1" address="unix:///run/containerd/s/371852e922705799ef1e81f6a3dbc633d0c4832587db2e456e2f93803971407d" protocol=ttrpc version=3 Apr 16 00:16:54.361394 systemd[1]: Started cri-containerd-7a2bca75b5652298963493d2bcb82ca2fba6b1317e58bdf92e5ea89e356a83a1.scope - libcontainer container 7a2bca75b5652298963493d2bcb82ca2fba6b1317e58bdf92e5ea89e356a83a1. 
Apr 16 00:16:54.410126 containerd[1514]: time="2026-04-16T00:16:54.410092355Z" level=info msg="StartContainer for \"7a2bca75b5652298963493d2bcb82ca2fba6b1317e58bdf92e5ea89e356a83a1\" returns successfully" Apr 16 00:16:54.888220 kubelet[2684]: I0416 00:16:54.887655 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d5bb45b9d-xkbr9" podStartSLOduration=36.072353134 podStartE2EDuration="44.887629905s" podCreationTimestamp="2026-04-16 00:16:10 +0000 UTC" firstStartedPulling="2026-04-16 00:16:45.499027453 +0000 UTC m=+56.130002843" lastFinishedPulling="2026-04-16 00:16:54.314304224 +0000 UTC m=+64.945279614" observedRunningTime="2026-04-16 00:16:54.886307403 +0000 UTC m=+65.517282833" watchObservedRunningTime="2026-04-16 00:16:54.887629905 +0000 UTC m=+65.518605295" Apr 16 00:16:54.973197 kubelet[2684]: I0416 00:16:54.972970 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84cb8894f8-djtj2" podStartSLOduration=34.479604706 podStartE2EDuration="42.972950336s" podCreationTimestamp="2026-04-16 00:16:12 +0000 UTC" firstStartedPulling="2026-04-16 00:16:45.413859578 +0000 UTC m=+56.044834968" lastFinishedPulling="2026-04-16 00:16:53.907205168 +0000 UTC m=+64.538180598" observedRunningTime="2026-04-16 00:16:54.924807226 +0000 UTC m=+65.555782616" watchObservedRunningTime="2026-04-16 00:16:54.972950336 +0000 UTC m=+65.603925726" Apr 16 00:18:15.449856 systemd[1]: Started sshd@7-88.198.131.37:22-4.175.71.9:47510.service - OpenSSH per-connection server daemon (4.175.71.9:47510). Apr 16 00:18:15.605328 sshd[5528]: Accepted publickey for core from 4.175.71.9 port 47510 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:15.608876 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:15.614684 systemd-logind[1486]: New session 8 of user core. 
Apr 16 00:18:15.621576 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 00:18:15.761392 sshd[5531]: Connection closed by 4.175.71.9 port 47510 Apr 16 00:18:15.760894 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:15.766797 systemd[1]: sshd@7-88.198.131.37:22-4.175.71.9:47510.service: Deactivated successfully. Apr 16 00:18:15.769158 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 00:18:15.771171 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Apr 16 00:18:15.772887 systemd-logind[1486]: Removed session 8. Apr 16 00:18:20.791974 systemd[1]: Started sshd@8-88.198.131.37:22-4.175.71.9:47516.service - OpenSSH per-connection server daemon (4.175.71.9:47516). Apr 16 00:18:20.928975 sshd[5588]: Accepted publickey for core from 4.175.71.9 port 47516 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:20.931210 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:20.938067 systemd-logind[1486]: New session 9 of user core. Apr 16 00:18:20.944563 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 00:18:21.069646 sshd[5591]: Connection closed by 4.175.71.9 port 47516 Apr 16 00:18:21.070747 sshd-session[5588]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:21.077290 systemd[1]: sshd@8-88.198.131.37:22-4.175.71.9:47516.service: Deactivated successfully. Apr 16 00:18:21.081415 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 00:18:21.083119 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Apr 16 00:18:21.084877 systemd-logind[1486]: Removed session 9. Apr 16 00:18:26.097294 systemd[1]: Started sshd@9-88.198.131.37:22-4.175.71.9:33004.service - OpenSSH per-connection server daemon (4.175.71.9:33004). 
Apr 16 00:18:26.247267 sshd[5626]: Accepted publickey for core from 4.175.71.9 port 33004 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:26.248735 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:26.255558 systemd-logind[1486]: New session 10 of user core. Apr 16 00:18:26.262716 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 00:18:26.386499 sshd[5629]: Connection closed by 4.175.71.9 port 33004 Apr 16 00:18:26.387661 sshd-session[5626]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:26.393257 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Apr 16 00:18:26.393568 systemd[1]: sshd@9-88.198.131.37:22-4.175.71.9:33004.service: Deactivated successfully. Apr 16 00:18:26.396714 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 00:18:26.399376 systemd-logind[1486]: Removed session 10. Apr 16 00:18:31.417008 systemd[1]: Started sshd@10-88.198.131.37:22-4.175.71.9:33012.service - OpenSSH per-connection server daemon (4.175.71.9:33012). Apr 16 00:18:31.559496 sshd[5661]: Accepted publickey for core from 4.175.71.9 port 33012 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:31.561404 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:31.566521 systemd-logind[1486]: New session 11 of user core. Apr 16 00:18:31.571485 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 00:18:31.695409 sshd[5664]: Connection closed by 4.175.71.9 port 33012 Apr 16 00:18:31.697512 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:31.702650 systemd[1]: sshd@10-88.198.131.37:22-4.175.71.9:33012.service: Deactivated successfully. Apr 16 00:18:31.708505 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 00:18:31.711089 systemd-logind[1486]: Session 11 logged out. 
Waiting for processes to exit. Apr 16 00:18:31.728118 systemd[1]: Started sshd@11-88.198.131.37:22-4.175.71.9:33020.service - OpenSSH per-connection server daemon (4.175.71.9:33020). Apr 16 00:18:31.729806 systemd-logind[1486]: Removed session 11. Apr 16 00:18:31.861479 sshd[5676]: Accepted publickey for core from 4.175.71.9 port 33020 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:31.865418 sshd-session[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:31.872832 systemd-logind[1486]: New session 12 of user core. Apr 16 00:18:31.881495 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 00:18:32.047393 sshd[5703]: Connection closed by 4.175.71.9 port 33020 Apr 16 00:18:32.047795 sshd-session[5676]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:32.056078 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Apr 16 00:18:32.056719 systemd[1]: sshd@11-88.198.131.37:22-4.175.71.9:33020.service: Deactivated successfully. Apr 16 00:18:32.060693 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 00:18:32.076575 systemd[1]: Started sshd@12-88.198.131.37:22-4.175.71.9:33034.service - OpenSSH per-connection server daemon (4.175.71.9:33034). Apr 16 00:18:32.079120 systemd-logind[1486]: Removed session 12. Apr 16 00:18:32.220673 sshd[5713]: Accepted publickey for core from 4.175.71.9 port 33034 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:32.222772 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:32.227968 systemd-logind[1486]: New session 13 of user core. Apr 16 00:18:32.236956 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 16 00:18:32.366018 sshd[5716]: Connection closed by 4.175.71.9 port 33034 Apr 16 00:18:32.367626 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:32.372477 systemd[1]: sshd@12-88.198.131.37:22-4.175.71.9:33034.service: Deactivated successfully. Apr 16 00:18:32.374635 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 00:18:32.376918 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Apr 16 00:18:32.378608 systemd-logind[1486]: Removed session 13. Apr 16 00:18:37.399450 systemd[1]: Started sshd@13-88.198.131.37:22-4.175.71.9:54030.service - OpenSSH per-connection server daemon (4.175.71.9:54030). Apr 16 00:18:37.533123 sshd[5728]: Accepted publickey for core from 4.175.71.9 port 54030 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:37.535698 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:37.542620 systemd-logind[1486]: New session 14 of user core. Apr 16 00:18:37.547431 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 00:18:37.672967 sshd[5731]: Connection closed by 4.175.71.9 port 54030 Apr 16 00:18:37.673921 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:37.680472 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Apr 16 00:18:37.682336 systemd[1]: sshd@13-88.198.131.37:22-4.175.71.9:54030.service: Deactivated successfully. Apr 16 00:18:37.686912 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 00:18:37.700607 systemd[1]: Started sshd@14-88.198.131.37:22-4.175.71.9:54038.service - OpenSSH per-connection server daemon (4.175.71.9:54038). Apr 16 00:18:37.702409 systemd-logind[1486]: Removed session 14. 
Apr 16 00:18:37.834751 sshd[5742]: Accepted publickey for core from 4.175.71.9 port 54038 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:37.836622 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:37.842340 systemd-logind[1486]: New session 15 of user core. Apr 16 00:18:37.848371 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 00:18:38.142675 sshd[5745]: Connection closed by 4.175.71.9 port 54038 Apr 16 00:18:38.144372 sshd-session[5742]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:38.150042 systemd[1]: sshd@14-88.198.131.37:22-4.175.71.9:54038.service: Deactivated successfully. Apr 16 00:18:38.153163 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 00:18:38.155083 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Apr 16 00:18:38.171699 systemd[1]: Started sshd@15-88.198.131.37:22-4.175.71.9:54048.service - OpenSSH per-connection server daemon (4.175.71.9:54048). Apr 16 00:18:38.174422 systemd-logind[1486]: Removed session 15. Apr 16 00:18:38.302230 sshd[5755]: Accepted publickey for core from 4.175.71.9 port 54048 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:38.305361 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:38.311954 systemd-logind[1486]: New session 16 of user core. Apr 16 00:18:38.317452 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 00:18:38.961036 sshd[5758]: Connection closed by 4.175.71.9 port 54048 Apr 16 00:18:38.961641 sshd-session[5755]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:38.969690 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Apr 16 00:18:38.970315 systemd[1]: sshd@15-88.198.131.37:22-4.175.71.9:54048.service: Deactivated successfully. 
Apr 16 00:18:38.975086 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 00:18:38.988920 systemd[1]: Started sshd@16-88.198.131.37:22-4.175.71.9:54050.service - OpenSSH per-connection server daemon (4.175.71.9:54050). Apr 16 00:18:38.990259 systemd-logind[1486]: Removed session 16. Apr 16 00:18:39.135473 sshd[5803]: Accepted publickey for core from 4.175.71.9 port 54050 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:39.137523 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:39.144957 systemd-logind[1486]: New session 17 of user core. Apr 16 00:18:39.152488 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 00:18:39.438914 sshd[5806]: Connection closed by 4.175.71.9 port 54050 Apr 16 00:18:39.439707 sshd-session[5803]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:39.448359 systemd[1]: sshd@16-88.198.131.37:22-4.175.71.9:54050.service: Deactivated successfully. Apr 16 00:18:39.451997 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 00:18:39.454462 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Apr 16 00:18:39.467790 systemd[1]: Started sshd@17-88.198.131.37:22-4.175.71.9:54056.service - OpenSSH per-connection server daemon (4.175.71.9:54056). Apr 16 00:18:39.469276 systemd-logind[1486]: Removed session 17. Apr 16 00:18:39.609166 sshd[5816]: Accepted publickey for core from 4.175.71.9 port 54056 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:39.612748 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:39.618385 systemd-logind[1486]: New session 18 of user core. Apr 16 00:18:39.625646 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 16 00:18:39.747130 sshd[5819]: Connection closed by 4.175.71.9 port 54056 Apr 16 00:18:39.748474 sshd-session[5816]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:39.753796 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Apr 16 00:18:39.754088 systemd[1]: sshd@17-88.198.131.37:22-4.175.71.9:54056.service: Deactivated successfully. Apr 16 00:18:39.757081 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 00:18:39.759389 systemd-logind[1486]: Removed session 18. Apr 16 00:18:44.774977 systemd[1]: Started sshd@18-88.198.131.37:22-4.175.71.9:54068.service - OpenSSH per-connection server daemon (4.175.71.9:54068). Apr 16 00:18:44.910683 sshd[5834]: Accepted publickey for core from 4.175.71.9 port 54068 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI Apr 16 00:18:44.913716 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:18:44.919873 systemd-logind[1486]: New session 19 of user core. Apr 16 00:18:44.925511 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 00:18:45.053550 sshd[5837]: Connection closed by 4.175.71.9 port 54068 Apr 16 00:18:45.054878 sshd-session[5834]: pam_unix(sshd:session): session closed for user core Apr 16 00:18:45.061721 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Apr 16 00:18:45.062104 systemd[1]: sshd@18-88.198.131.37:22-4.175.71.9:54068.service: Deactivated successfully. Apr 16 00:18:45.066594 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 00:18:45.071352 systemd-logind[1486]: Removed session 19. Apr 16 00:18:50.082470 systemd[1]: Started sshd@19-88.198.131.37:22-4.175.71.9:35822.service - OpenSSH per-connection server daemon (4.175.71.9:35822). 
Apr 16 00:18:50.212265 sshd[5874]: Accepted publickey for core from 4.175.71.9 port 35822 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI
Apr 16 00:18:50.215642 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:18:50.220254 systemd-logind[1486]: New session 20 of user core.
Apr 16 00:18:50.228506 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 00:18:50.361359 sshd[5877]: Connection closed by 4.175.71.9 port 35822
Apr 16 00:18:50.362490 sshd-session[5874]: pam_unix(sshd:session): session closed for user core
Apr 16 00:18:50.370932 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit.
Apr 16 00:18:50.371292 systemd[1]: sshd@19-88.198.131.37:22-4.175.71.9:35822.service: Deactivated successfully.
Apr 16 00:18:50.373457 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 00:18:50.375835 systemd-logind[1486]: Removed session 20.
Apr 16 00:18:55.392426 systemd[1]: Started sshd@20-88.198.131.37:22-4.175.71.9:57218.service - OpenSSH per-connection server daemon (4.175.71.9:57218).
Apr 16 00:18:55.529884 sshd[5911]: Accepted publickey for core from 4.175.71.9 port 57218 ssh2: RSA SHA256:FiR974sjzPvW6bQBEQo8MAp+1XsnBVWQJvv311fPFzI
Apr 16 00:18:55.532439 sshd-session[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:18:55.540326 systemd-logind[1486]: New session 21 of user core.
Apr 16 00:18:55.546509 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 00:18:55.673722 sshd[5914]: Connection closed by 4.175.71.9 port 57218
Apr 16 00:18:55.674501 sshd-session[5911]: pam_unix(sshd:session): session closed for user core
Apr 16 00:18:55.680082 systemd[1]: sshd@20-88.198.131.37:22-4.175.71.9:57218.service: Deactivated successfully.
Apr 16 00:18:55.683486 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 00:18:55.687637 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit.
Apr 16 00:18:55.689760 systemd-logind[1486]: Removed session 21.
Apr 16 00:19:11.399627 systemd[1]: cri-containerd-f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44.scope: Deactivated successfully.
Apr 16 00:19:11.400341 systemd[1]: cri-containerd-f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44.scope: Consumed 3.980s CPU time, 64.9M memory peak, 2.4M read from disk.
Apr 16 00:19:11.403597 containerd[1514]: time="2026-04-16T00:19:11.403557034Z" level=info msg="received container exit event container_id:\"f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44\" id:\"f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44\" pid:2536 exit_status:1 exited_at:{seconds:1776298751 nanos:400696024}"
Apr 16 00:19:11.432738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44-rootfs.mount: Deactivated successfully.
Apr 16 00:19:11.849199 kubelet[2684]: E0416 00:19:11.848503 2684 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47092->10.0.0.2:2379: read: connection timed out"
Apr 16 00:19:12.328797 kubelet[2684]: I0416 00:19:12.328083 2684 scope.go:117] "RemoveContainer" containerID="f8422208af7bb30e997dee1fa6921007f04c1019467a8ef4acbc1979fd20de44"
Apr 16 00:19:12.331551 containerd[1514]: time="2026-04-16T00:19:12.331510896Z" level=info msg="CreateContainer within sandbox \"379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 00:19:12.346553 containerd[1514]: time="2026-04-16T00:19:12.346495838Z" level=info msg="Container 259fc87128464abd607f9d082ca808c90a716cee61811a2026ae9a85b87dc439: CDI devices from CRI Config.CDIDevices: []"
Apr 16 00:19:12.358763 containerd[1514]: time="2026-04-16T00:19:12.358695172Z" level=info msg="CreateContainer within sandbox \"379b0159edd0af464f965f62cb7a63362c4eb36c2e012072c9e91f249a0fdfc7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"259fc87128464abd607f9d082ca808c90a716cee61811a2026ae9a85b87dc439\""
Apr 16 00:19:12.359848 containerd[1514]: time="2026-04-16T00:19:12.359648628Z" level=info msg="StartContainer for \"259fc87128464abd607f9d082ca808c90a716cee61811a2026ae9a85b87dc439\""
Apr 16 00:19:12.361059 containerd[1514]: time="2026-04-16T00:19:12.361024413Z" level=info msg="connecting to shim 259fc87128464abd607f9d082ca808c90a716cee61811a2026ae9a85b87dc439" address="unix:///run/containerd/s/788c33d8b49947a327258bf8225ab72166f266ce4847340ece134ca534d0b896" protocol=ttrpc version=3
Apr 16 00:19:12.376557 systemd[1]: cri-containerd-6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5.scope: Deactivated successfully.
Apr 16 00:19:12.377062 systemd[1]: cri-containerd-6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5.scope: Consumed 15.263s CPU time, 126.4M memory peak, 3.3M read from disk.
Apr 16 00:19:12.384174 containerd[1514]: time="2026-04-16T00:19:12.384038495Z" level=info msg="received container exit event container_id:\"6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5\" id:\"6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5\" pid:3014 exit_status:1 exited_at:{seconds:1776298752 nanos:383435085}"
Apr 16 00:19:12.396661 systemd[1]: Started cri-containerd-259fc87128464abd607f9d082ca808c90a716cee61811a2026ae9a85b87dc439.scope - libcontainer container 259fc87128464abd607f9d082ca808c90a716cee61811a2026ae9a85b87dc439.
Apr 16 00:19:12.418007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5-rootfs.mount: Deactivated successfully.
Apr 16 00:19:12.460677 containerd[1514]: time="2026-04-16T00:19:12.460617916Z" level=info msg="StartContainer for \"259fc87128464abd607f9d082ca808c90a716cee61811a2026ae9a85b87dc439\" returns successfully"
Apr 16 00:19:13.336229 kubelet[2684]: I0416 00:19:13.336157 2684 scope.go:117] "RemoveContainer" containerID="6453e38f63b15c63a84241757dcbce7d5dcf9507b84274a690ea3fcf222e44b5"
Apr 16 00:19:13.339364 containerd[1514]: time="2026-04-16T00:19:13.339309156Z" level=info msg="CreateContainer within sandbox \"c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 16 00:19:13.355192 containerd[1514]: time="2026-04-16T00:19:13.353933850Z" level=info msg="Container 6d75c4e280d0663d8587b598941c8a6d9e8fc5d9c52489f0d4bda67c7a933dd4: CDI devices from CRI Config.CDIDevices: []"
Apr 16 00:19:13.370522 containerd[1514]: time="2026-04-16T00:19:13.370464216Z" level=info msg="CreateContainer within sandbox \"c8407bf730ed8184e429fe5f383ac30806c20800777650c2c99ba3f4e7607217\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6d75c4e280d0663d8587b598941c8a6d9e8fc5d9c52489f0d4bda67c7a933dd4\""
Apr 16 00:19:13.372548 containerd[1514]: time="2026-04-16T00:19:13.372510772Z" level=info msg="StartContainer for \"6d75c4e280d0663d8587b598941c8a6d9e8fc5d9c52489f0d4bda67c7a933dd4\""
Apr 16 00:19:13.373896 containerd[1514]: time="2026-04-16T00:19:13.373857035Z" level=info msg="connecting to shim 6d75c4e280d0663d8587b598941c8a6d9e8fc5d9c52489f0d4bda67c7a933dd4" address="unix:///run/containerd/s/980f6a5689d5184356e0c77f074e3f58a54297ecb6b37e15375a22a8473b2df4" protocol=ttrpc version=3
Apr 16 00:19:13.400422 systemd[1]: Started cri-containerd-6d75c4e280d0663d8587b598941c8a6d9e8fc5d9c52489f0d4bda67c7a933dd4.scope - libcontainer container 6d75c4e280d0663d8587b598941c8a6d9e8fc5d9c52489f0d4bda67c7a933dd4.
Apr 16 00:19:13.442412 containerd[1514]: time="2026-04-16T00:19:13.442366263Z" level=info msg="StartContainer for \"6d75c4e280d0663d8587b598941c8a6d9e8fc5d9c52489f0d4bda67c7a933dd4\" returns successfully"
Apr 16 00:19:13.772541 kubelet[2684]: E0416 00:19:13.765974 2684 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46722->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-0840528111.18a6ae4bba005cc8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-0840528111,UID:5fde35acccd2d66f7431a41f7e967688,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-0840528111,},FirstTimestamp:2026-04-16 00:19:03.310605512 +0000 UTC m=+193.941580902,LastTimestamp:2026-04-16 00:19:03.310605512 +0000 UTC m=+193.941580902,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-0840528111,}"