Apr 16 23:28:48.806522 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 16 23:28:48.806546 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 16 22:10:49 -00 2026
Apr 16 23:28:48.806556 kernel: KASLR enabled
Apr 16 23:28:48.806562 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 16 23:28:48.806568 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Apr 16 23:28:48.806573 kernel: random: crng init done
Apr 16 23:28:48.806580 kernel: secureboot: Secure boot disabled
Apr 16 23:28:48.806585 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:28:48.806591 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 16 23:28:48.806597 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 16 23:28:48.806604 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806610 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806622 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806629 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806636 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806645 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806651 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806657 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806664 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:28:48.806670 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 23:28:48.806676 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 16 23:28:48.806682 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 16 23:28:48.806688 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 16 23:28:48.806753 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Apr 16 23:28:48.806760 kernel: Zone ranges:
Apr 16 23:28:48.806766 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 16 23:28:48.806775 kernel: DMA32 empty
Apr 16 23:28:48.806781 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 16 23:28:48.806787 kernel: Device empty
Apr 16 23:28:48.806792 kernel: Movable zone start for each node
Apr 16 23:28:48.806798 kernel: Early memory node ranges
Apr 16 23:28:48.806805 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Apr 16 23:28:48.806811 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Apr 16 23:28:48.806817 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Apr 16 23:28:48.806823 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 16 23:28:48.806829 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 16 23:28:48.806835 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 16 23:28:48.806841 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 16 23:28:48.806848 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 16 23:28:48.806854 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 16 23:28:48.806863 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 16 23:28:48.806869 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 16 23:28:48.806876 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Apr 16 23:28:48.806883 kernel: psci: probing for conduit method from ACPI.
Apr 16 23:28:48.806890 kernel: psci: PSCIv1.1 detected in firmware.
Apr 16 23:28:48.806896 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 16 23:28:48.806903 kernel: psci: Trusted OS migration not required
Apr 16 23:28:48.806909 kernel: psci: SMC Calling Convention v1.1
Apr 16 23:28:48.806916 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 16 23:28:48.806922 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 16 23:28:48.806929 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 16 23:28:48.806935 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 16 23:28:48.806942 kernel: Detected PIPT I-cache on CPU0
Apr 16 23:28:48.806948 kernel: CPU features: detected: GIC system register CPU interface
Apr 16 23:28:48.806956 kernel: CPU features: detected: Spectre-v4
Apr 16 23:28:48.806962 kernel: CPU features: detected: Spectre-BHB
Apr 16 23:28:48.806968 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 16 23:28:48.806975 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 16 23:28:48.806981 kernel: CPU features: detected: ARM erratum 1418040
Apr 16 23:28:48.806987 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 16 23:28:48.806994 kernel: alternatives: applying boot alternatives
Apr 16 23:28:48.807001 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:28:48.807008 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:28:48.807015 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 23:28:48.807021 kernel: Fallback order for Node 0: 0
Apr 16 23:28:48.807029 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Apr 16 23:28:48.807035 kernel: Policy zone: Normal
Apr 16 23:28:48.807077 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:28:48.807084 kernel: software IO TLB: area num 2.
Apr 16 23:28:48.807090 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Apr 16 23:28:48.807096 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:28:48.807103 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:28:48.807110 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:28:48.807116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:28:48.807123 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:28:48.807129 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:28:48.807136 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:28:48.807145 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:28:48.807152 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:28:48.807158 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:28:48.807164 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 16 23:28:48.807171 kernel: GICv3: 256 SPIs implemented
Apr 16 23:28:48.807177 kernel: GICv3: 0 Extended SPIs implemented
Apr 16 23:28:48.807183 kernel: Root IRQ handler: gic_handle_irq
Apr 16 23:28:48.807189 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 16 23:28:48.807196 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Apr 16 23:28:48.807202 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 16 23:28:48.807208 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 16 23:28:48.807216 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Apr 16 23:28:48.807223 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Apr 16 23:28:48.807229 kernel: GICv3: using LPI property table @0x0000000100120000
Apr 16 23:28:48.807236 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Apr 16 23:28:48.807242 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:28:48.807249 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 23:28:48.807255 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 16 23:28:48.807262 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 16 23:28:48.807268 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 16 23:28:48.807275 kernel: Console: colour dummy device 80x25
Apr 16 23:28:48.807282 kernel: ACPI: Core revision 20240827
Apr 16 23:28:48.807290 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 16 23:28:48.807297 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:28:48.807304 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:28:48.807310 kernel: landlock: Up and running.
Apr 16 23:28:48.807317 kernel: SELinux: Initializing.
Apr 16 23:28:48.807324 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:28:48.807331 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:28:48.807338 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:28:48.807344 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:28:48.807353 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:28:48.807359 kernel: Remapping and enabling EFI services.
Apr 16 23:28:48.807366 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:28:48.807372 kernel: Detected PIPT I-cache on CPU1
Apr 16 23:28:48.807379 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 16 23:28:48.807385 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Apr 16 23:28:48.807392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 23:28:48.807398 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 16 23:28:48.807405 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:28:48.807412 kernel: SMP: Total of 2 processors activated.
Apr 16 23:28:48.807424 kernel: CPU: All CPU(s) started at EL1
Apr 16 23:28:48.807431 kernel: CPU features: detected: 32-bit EL0 Support
Apr 16 23:28:48.807440 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 16 23:28:48.807447 kernel: CPU features: detected: Common not Private translations
Apr 16 23:28:48.807453 kernel: CPU features: detected: CRC32 instructions
Apr 16 23:28:48.807460 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 16 23:28:48.807467 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 16 23:28:48.807476 kernel: CPU features: detected: LSE atomic instructions
Apr 16 23:28:48.807483 kernel: CPU features: detected: Privileged Access Never
Apr 16 23:28:48.807490 kernel: CPU features: detected: RAS Extension Support
Apr 16 23:28:48.807497 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 16 23:28:48.807504 kernel: alternatives: applying system-wide alternatives
Apr 16 23:28:48.807511 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Apr 16 23:28:48.807518 kernel: Memory: 3858780K/4096000K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 215732K reserved, 16384K cma-reserved)
Apr 16 23:28:48.807525 kernel: devtmpfs: initialized
Apr 16 23:28:48.807532 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:28:48.807541 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:28:48.807548 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 16 23:28:48.807555 kernel: 0 pages in range for non-PLT usage
Apr 16 23:28:48.807562 kernel: 508384 pages in range for PLT usage
Apr 16 23:28:48.807569 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:28:48.807575 kernel: SMBIOS 3.0.0 present.
Apr 16 23:28:48.807582 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 16 23:28:48.807589 kernel: DMI: Memory slots populated: 1/1
Apr 16 23:28:48.807596 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:28:48.807605 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 16 23:28:48.807612 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 16 23:28:48.807619 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 16 23:28:48.807626 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:28:48.807633 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Apr 16 23:28:48.807640 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:28:48.807647 kernel: cpuidle: using governor menu
Apr 16 23:28:48.807654 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 16 23:28:48.807661 kernel: ASID allocator initialised with 32768 entries
Apr 16 23:28:48.807669 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:28:48.807676 kernel: Serial: AMBA PL011 UART driver
Apr 16 23:28:48.807682 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:28:48.807689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:28:48.808990 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 16 23:28:48.809001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 16 23:28:48.809008 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:28:48.809015 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:28:48.809022 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 16 23:28:48.809035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 16 23:28:48.809075 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:28:48.809082 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:28:48.809089 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:28:48.809096 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:28:48.809104 kernel: ACPI: Interpreter enabled
Apr 16 23:28:48.809110 kernel: ACPI: Using GIC for interrupt routing
Apr 16 23:28:48.809117 kernel: ACPI: MCFG table detected, 1 entries
Apr 16 23:28:48.809124 kernel: ACPI: CPU0 has been hot-added
Apr 16 23:28:48.809131 kernel: ACPI: CPU1 has been hot-added
Apr 16 23:28:48.809140 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 16 23:28:48.809147 kernel: printk: legacy console [ttyAMA0] enabled
Apr 16 23:28:48.809154 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 23:28:48.809306 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:28:48.809372 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 16 23:28:48.809446 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 16 23:28:48.809504 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 16 23:28:48.809565 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 16 23:28:48.809574 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 16 23:28:48.809582 kernel: PCI host bridge to bus 0000:00
Apr 16 23:28:48.809650 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 16 23:28:48.810763 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 16 23:28:48.810849 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 16 23:28:48.810905 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 23:28:48.810994 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:28:48.811116 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Apr 16 23:28:48.811188 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Apr 16 23:28:48.811251 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 16 23:28:48.811317 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.811376 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Apr 16 23:28:48.811439 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:28:48.811497 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Apr 16 23:28:48.811555 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Apr 16 23:28:48.811621 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.811680 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Apr 16 23:28:48.812851 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:28:48.812928 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Apr 16 23:28:48.812998 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.813108 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Apr 16 23:28:48.813181 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:28:48.813241 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Apr 16 23:28:48.813299 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Apr 16 23:28:48.813371 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.813430 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Apr 16 23:28:48.813491 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:28:48.813549 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Apr 16 23:28:48.813608 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Apr 16 23:28:48.813675 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.814267 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Apr 16 23:28:48.814352 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:28:48.814412 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 16 23:28:48.814476 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Apr 16 23:28:48.814543 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.814602 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Apr 16 23:28:48.814660 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:28:48.814738 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Apr 16 23:28:48.814799 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Apr 16 23:28:48.814865 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.814927 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Apr 16 23:28:48.814985 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:28:48.815079 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Apr 16 23:28:48.815148 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Apr 16 23:28:48.815220 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.815280 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Apr 16 23:28:48.815342 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:28:48.815399 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Apr 16 23:28:48.815464 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:28:48.815525 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Apr 16 23:28:48.815582 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:28:48.815640 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Apr 16 23:28:48.816395 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Apr 16 23:28:48.816492 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Apr 16 23:28:48.816566 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:28:48.816630 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Apr 16 23:28:48.816691 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 16 23:28:48.816784 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:28:48.816856 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 16 23:28:48.816917 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Apr 16 23:28:48.816995 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 16 23:28:48.817099 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Apr 16 23:28:48.817165 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 16 23:28:48.817235 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:28:48.817295 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 16 23:28:48.817363 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:28:48.817428 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Apr 16 23:28:48.817490 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 16 23:28:48.817557 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 16 23:28:48.817617 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Apr 16 23:28:48.817685 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 16 23:28:48.817788 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:28:48.817859 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Apr 16 23:28:48.817942 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 16 23:28:48.818015 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:28:48.818105 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 16 23:28:48.818167 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 16 23:28:48.818226 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 16 23:28:48.818287 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 16 23:28:48.818345 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 16 23:28:48.818408 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 16 23:28:48.818468 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 16 23:28:48.818528 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 16 23:28:48.818586 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 16 23:28:48.818649 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 16 23:28:48.818759 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 16 23:28:48.818826 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 16 23:28:48.818891 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 16 23:28:48.818948 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 16 23:28:48.819006 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 16 23:28:48.819085 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 16 23:28:48.819152 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 16 23:28:48.819211 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 16 23:28:48.819272 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 16 23:28:48.819337 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 16 23:28:48.819396 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 16 23:28:48.819460 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 16 23:28:48.819520 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 16 23:28:48.819581 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 16 23:28:48.819645 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 16 23:28:48.820238 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 16 23:28:48.820331 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 16 23:28:48.820394 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Apr 16 23:28:48.820453 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Apr 16 23:28:48.820513 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Apr 16 23:28:48.820570 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Apr 16 23:28:48.820629 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Apr 16 23:28:48.820687 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Apr 16 23:28:48.821240 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Apr 16 23:28:48.821303 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Apr 16 23:28:48.821364 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Apr 16 23:28:48.821422 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Apr 16 23:28:48.821482 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Apr 16 23:28:48.821541 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Apr 16 23:28:48.821605 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Apr 16 23:28:48.821667 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Apr 16 23:28:48.821976 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Apr 16 23:28:48.822094 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Apr 16 23:28:48.822166 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Apr 16 23:28:48.822225 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Apr 16 23:28:48.822289 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Apr 16 23:28:48.822347 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Apr 16 23:28:48.822406 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Apr 16 23:28:48.822470 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Apr 16 23:28:48.822529 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Apr 16 23:28:48.822587 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Apr 16 23:28:48.822646 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Apr 16 23:28:48.822792 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Apr 16 23:28:48.822868 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Apr 16 23:28:48.822926 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Apr 16 23:28:48.822985 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Apr 16 23:28:48.823061 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Apr 16 23:28:48.823130 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Apr 16 23:28:48.823190 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Apr 16 23:28:48.823248 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Apr 16 23:28:48.823311 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Apr 16 23:28:48.823376 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Apr 16 23:28:48.823434 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Apr 16 23:28:48.823492 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Apr 16 23:28:48.823559 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Apr 16 23:28:48.823624 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Apr 16 23:28:48.824446 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Apr 16 23:28:48.824555 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Apr 16 23:28:48.824626 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Apr 16 23:28:48.824687 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:28:48.824776 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 16 23:28:48.824837 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 16 23:28:48.824895 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 16 23:28:48.824962 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Apr 16 23:28:48.825022 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:28:48.825101 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 16 23:28:48.825167 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 16 23:28:48.825226 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 16 23:28:48.825292 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Apr 16 23:28:48.825362 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Apr 16 23:28:48.825422 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:28:48.825481 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 16 23:28:48.825543 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 16 23:28:48.825602 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 16 23:28:48.825669 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Apr 16 23:28:48.826073 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:28:48.826150 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 16 23:28:48.826209 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 16 23:28:48.826275 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 16 23:28:48.826350 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Apr 16 23:28:48.826410 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Apr 16 23:28:48.826469 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:28:48.826527 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 16 23:28:48.826585 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 16 23:28:48.826643 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 16 23:28:48.826741 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Apr 16 23:28:48.826809 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Apr 16 23:28:48.826869 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:28:48.826940 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 16 23:28:48.827001 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 16 23:28:48.827101 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 16 23:28:48.827173 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Apr 16 23:28:48.827247 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Apr 16 23:28:48.827314 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Apr 16 23:28:48.827374 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:28:48.827435 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 16 23:28:48.827497 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 16 23:28:48.827557 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 16 23:28:48.827616 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:28:48.827674 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 16 23:28:48.827748 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 16 23:28:48.827809 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 16 23:28:48.827869 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:28:48.827927 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 16
23:28:48.827984 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Apr 16 23:28:48.828058 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Apr 16 23:28:48.828124 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 16 23:28:48.828178 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 16 23:28:48.828231 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 16 23:28:48.828383 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Apr 16 23:28:48.828447 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Apr 16 23:28:48.828508 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Apr 16 23:28:48.828573 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Apr 16 23:28:48.828639 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Apr 16 23:28:48.828727 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Apr 16 23:28:48.828802 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Apr 16 23:28:48.828860 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Apr 16 23:28:48.828920 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Apr 16 23:28:48.828995 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 16 23:28:48.829066 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Apr 16 23:28:48.829122 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Apr 16 23:28:48.829182 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Apr 16 23:28:48.829237 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Apr 16 23:28:48.829290 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Apr 16 23:28:48.829355 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Apr 16 23:28:48.829413 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Apr 16 23:28:48.829468 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 16 23:28:48.829530 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Apr 16 23:28:48.829584 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Apr 16 23:28:48.829637 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 16 23:28:48.830669 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Apr 16 23:28:48.830897 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Apr 16 23:28:48.830964 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 16 23:28:48.831029 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Apr 16 23:28:48.831117 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Apr 16 23:28:48.831181 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Apr 16 23:28:48.831191 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 16 23:28:48.831199 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 16 23:28:48.831212 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 16 23:28:48.831219 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 16 23:28:48.831227 kernel: iommu: Default domain type: Translated Apr 16 23:28:48.831235 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 16 23:28:48.831242 kernel: efivars: Registered efivars operations Apr 16 23:28:48.831249 kernel: vgaarb: loaded Apr 16 23:28:48.831257 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 16 23:28:48.831264 kernel: VFS: Disk quotas dquot_6.6.0 Apr 16 23:28:48.831271 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 16 23:28:48.831280 kernel: pnp: PnP ACPI init Apr 16 23:28:48.831354 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 16 23:28:48.831365 kernel: pnp: PnP ACPI: found 1 devices Apr 16 23:28:48.831373 kernel: NET: Registered PF_INET 
protocol family Apr 16 23:28:48.831380 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 16 23:28:48.831388 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 16 23:28:48.831395 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 16 23:28:48.831403 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 16 23:28:48.831412 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 16 23:28:48.831421 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 16 23:28:48.831429 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 23:28:48.831436 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 16 23:28:48.831444 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 23:28:48.831511 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 16 23:28:48.831522 kernel: PCI: CLS 0 bytes, default 64 Apr 16 23:28:48.831529 kernel: kvm [1]: HYP mode not available Apr 16 23:28:48.831537 kernel: Initialise system trusted keyrings Apr 16 23:28:48.831546 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 16 23:28:48.831554 kernel: Key type asymmetric registered Apr 16 23:28:48.831561 kernel: Asymmetric key parser 'x509' registered Apr 16 23:28:48.831568 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 16 23:28:48.831576 kernel: io scheduler mq-deadline registered Apr 16 23:28:48.831583 kernel: io scheduler kyber registered Apr 16 23:28:48.831590 kernel: io scheduler bfq registered Apr 16 23:28:48.831599 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 16 23:28:48.831659 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 16 23:28:48.831746 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 16 23:28:48.831820 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.831881 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 16 23:28:48.831942 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 16 23:28:48.832000 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.832080 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 16 23:28:48.832142 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 16 23:28:48.832201 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.832267 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 16 23:28:48.832327 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 16 23:28:48.832385 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.832446 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 16 23:28:48.832505 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 16 23:28:48.832564 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.832634 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 16 23:28:48.833486 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 16 23:28:48.834362 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.834478 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 16 23:28:48.834542 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 16 23:28:48.834602 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.834662 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 16 23:28:48.834740 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 16 23:28:48.834803 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.834821 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 16 23:28:48.834883 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 16 23:28:48.834945 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 16 23:28:48.835005 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:28:48.835016 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 16 23:28:48.835024 kernel: ACPI: button: Power Button [PWRB] Apr 16 23:28:48.835031 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 16 23:28:48.835148 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 16 23:28:48.835224 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 16 23:28:48.835241 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 23:28:48.835248 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 16 23:28:48.835312 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 16 23:28:48.835323 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 16 23:28:48.835330 kernel: thunder_xcv, ver 1.0 Apr 16 23:28:48.835338 kernel: thunder_bgx, ver 1.0 Apr 16 23:28:48.835345 kernel: nicpf, ver 1.0 Apr 16 23:28:48.835353 kernel: nicvf, ver 1.0 Apr 16 23:28:48.835435 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 16 23:28:48.835496 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-16T23:28:48 UTC (1776382128) Apr 16 23:28:48.835506 kernel: hid: raw HID events 
driver (C) Jiri Kosina Apr 16 23:28:48.835513 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Apr 16 23:28:48.835521 kernel: watchdog: NMI not fully supported Apr 16 23:28:48.835528 kernel: watchdog: Hard watchdog permanently disabled Apr 16 23:28:48.835536 kernel: NET: Registered PF_INET6 protocol family Apr 16 23:28:48.835543 kernel: Segment Routing with IPv6 Apr 16 23:28:48.835550 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 23:28:48.835561 kernel: NET: Registered PF_PACKET protocol family Apr 16 23:28:48.835568 kernel: Key type dns_resolver registered Apr 16 23:28:48.835576 kernel: registered taskstats version 1 Apr 16 23:28:48.835583 kernel: Loading compiled-in X.509 certificates Apr 16 23:28:48.835591 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 4acad53138393591155ecb80320b4c1550e344f8' Apr 16 23:28:48.835598 kernel: Demotion targets for Node 0: null Apr 16 23:28:48.835605 kernel: Key type .fscrypt registered Apr 16 23:28:48.835613 kernel: Key type fscrypt-provisioning registered Apr 16 23:28:48.835620 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 23:28:48.835629 kernel: ima: Allocated hash algorithm: sha1 Apr 16 23:28:48.835636 kernel: ima: No architecture policies found Apr 16 23:28:48.835643 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 16 23:28:48.835651 kernel: clk: Disabling unused clocks Apr 16 23:28:48.835658 kernel: PM: genpd: Disabling unused power domains Apr 16 23:28:48.835665 kernel: Warning: unable to open an initial console. Apr 16 23:28:48.835673 kernel: Freeing unused kernel memory: 39552K Apr 16 23:28:48.835680 kernel: Run /init as init process Apr 16 23:28:48.835687 kernel: with arguments: Apr 16 23:28:48.836779 kernel: /init Apr 16 23:28:48.836793 kernel: with environment: Apr 16 23:28:48.836801 kernel: HOME=/ Apr 16 23:28:48.836808 kernel: TERM=linux Apr 16 23:28:48.836818 systemd[1]: Successfully made /usr/ read-only. 
Apr 16 23:28:48.836830 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:28:48.836839 systemd[1]: Detected virtualization kvm.
Apr 16 23:28:48.836847 systemd[1]: Detected architecture arm64.
Apr 16 23:28:48.836860 systemd[1]: Running in initrd.
Apr 16 23:28:48.836868 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:28:48.836876 systemd[1]: Hostname set to .
Apr 16 23:28:48.836884 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:28:48.836891 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:28:48.836899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:28:48.836907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:28:48.836916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:28:48.836926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:28:48.836934 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:28:48.836942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:28:48.836951 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:28:48.836959 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:28:48.836967 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:28:48.836977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:28:48.836985 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:28:48.836993 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:28:48.837001 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:28:48.837009 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:28:48.837017 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:28:48.837024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:28:48.837032 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:28:48.837055 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:28:48.837067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:28:48.837075 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:28:48.837083 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:28:48.837091 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:28:48.837098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:28:48.837106 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:28:48.837114 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:28:48.837123 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:28:48.837132 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:28:48.837141 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:28:48.837149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:28:48.837158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:48.837165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:28:48.837174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:28:48.837183 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:28:48.837191 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:28:48.837199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:48.837243 systemd-journald[245]: Collecting audit messages is disabled.
Apr 16 23:28:48.837266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:28:48.837274 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:28:48.837283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:28:48.837291 kernel: Bridge firewalling registered
Apr 16 23:28:48.837299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:28:48.837308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:28:48.837316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:28:48.837327 systemd-journald[245]: Journal started
Apr 16 23:28:48.837346 systemd-journald[245]: Runtime Journal (/run/log/journal/8dcbff955eb04ed591b10efd80f981c0) is 8M, max 76.5M, 68.5M free.
Apr 16 23:28:48.789870 systemd-modules-load[247]: Inserted module 'overlay'
Apr 16 23:28:48.815741 systemd-modules-load[247]: Inserted module 'br_netfilter'
Apr 16 23:28:48.842790 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:28:48.844807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:28:48.846852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:28:48.849027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:28:48.853907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:28:48.856838 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:28:48.879408 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:28:48.884996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:28:48.888350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:28:48.891200 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:28:48.929846 systemd-resolved[296]: Positive Trust Anchors:
Apr 16 23:28:48.930637 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:28:48.930673 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:28:48.941311 systemd-resolved[296]: Defaulting to hostname 'linux'.
Apr 16 23:28:48.942862 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:28:48.944210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:28:48.997783 kernel: SCSI subsystem initialized
Apr 16 23:28:49.001767 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:28:49.009750 kernel: iscsi: registered transport (tcp)
Apr 16 23:28:49.022848 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:28:49.022913 kernel: QLogic iSCSI HBA Driver
Apr 16 23:28:49.043771 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:28:49.067432 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:28:49.073580 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:28:49.122512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:28:49.125682 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 23:28:49.188756 kernel: raid6: neonx8 gen() 15517 MB/s
Apr 16 23:28:49.202749 kernel: raid6: neonx4 gen() 15517 MB/s
Apr 16 23:28:49.219791 kernel: raid6: neonx2 gen() 12996 MB/s
Apr 16 23:28:49.236777 kernel: raid6: neonx1 gen() 10255 MB/s
Apr 16 23:28:49.253775 kernel: raid6: int64x8 gen() 6742 MB/s
Apr 16 23:28:49.270772 kernel: raid6: int64x4 gen() 7195 MB/s
Apr 16 23:28:49.287759 kernel: raid6: int64x2 gen() 6007 MB/s
Apr 16 23:28:49.304790 kernel: raid6: int64x1 gen() 4986 MB/s
Apr 16 23:28:49.304875 kernel: raid6: using algorithm neonx8 gen() 15517 MB/s
Apr 16 23:28:49.321767 kernel: raid6: .... xor() 11958 MB/s, rmw enabled
Apr 16 23:28:49.321848 kernel: raid6: using neon recovery algorithm
Apr 16 23:28:49.326749 kernel: xor: measuring software checksum speed
Apr 16 23:28:49.326833 kernel: 8regs : 21618 MB/sec
Apr 16 23:28:49.326852 kernel: 32regs : 19501 MB/sec
Apr 16 23:28:49.327782 kernel: arm64_neon : 25801 MB/sec
Apr 16 23:28:49.327822 kernel: xor: using function: arm64_neon (25801 MB/sec)
Apr 16 23:28:49.380771 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 23:28:49.389107 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:28:49.391523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:28:49.425666 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Apr 16 23:28:49.429960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:28:49.435461 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 23:28:49.464863 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Apr 16 23:28:49.490869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:28:49.493119 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:28:49.557794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:28:49.559755 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 23:28:49.673723 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Apr 16 23:28:49.675492 kernel: scsi host0: Virtio SCSI HBA
Apr 16 23:28:49.680715 kernel: ACPI: bus type USB registered
Apr 16 23:28:49.680761 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 23:28:49.681975 kernel: usbcore: registered new interface driver usbfs
Apr 16 23:28:49.682017 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 16 23:28:49.684315 kernel: usbcore: registered new interface driver hub
Apr 16 23:28:49.684349 kernel: usbcore: registered new device driver usb
Apr 16 23:28:49.712155 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 16 23:28:49.712352 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 16 23:28:49.712432 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 16 23:28:49.713389 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 16 23:28:49.713552 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 23:28:49.720066 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:28:49.720219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:49.721914 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:49.730860 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 23:28:49.730881 kernel: GPT:17805311 != 80003071
Apr 16 23:28:49.730891 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 23:28:49.730899 kernel: GPT:17805311 != 80003071
Apr 16 23:28:49.730907 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 23:28:49.730916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:28:49.730925 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 16 23:28:49.733975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:49.739710 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:28:49.739893 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 16 23:28:49.739973 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 16 23:28:49.742722 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 16 23:28:49.742935 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 16 23:28:49.743042 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 23:28:49.744961 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:28:49.745151 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 16 23:28:49.745244 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 16 23:28:49.745983 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 16 23:28:49.750280 kernel: hub 1-0:1.0: USB hub found
Apr 16 23:28:49.754891 kernel: hub 1-0:1.0: 4 ports detected
Apr 16 23:28:49.759737 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 16 23:28:49.763293 kernel: hub 2-0:1.0: USB hub found
Apr 16 23:28:49.763479 kernel: hub 2-0:1.0: 4 ports detected
Apr 16 23:28:49.771420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:49.808469 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 16 23:28:49.834114 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 16 23:28:49.843082 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 16 23:28:49.843802 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 16 23:28:49.846972 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:28:49.856924 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 23:28:49.861336 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:28:49.862105 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:28:49.863869 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:28:49.867867 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 23:28:49.871885 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:28:49.888894 disk-uuid[602]: Primary Header is updated.
disk-uuid[602]: Secondary Entries is updated.
disk-uuid[602]: Secondary Header is updated.
Apr 16 23:28:49.894490 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:28:49.899739 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 23:28:49.995780 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 16 23:28:50.128723 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 16 23:28:50.128777 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 16 23:28:50.129914 kernel: usbcore: registered new interface driver usbhid Apr 16 23:28:50.129946 kernel: usbhid: USB HID core driver Apr 16 23:28:50.233853 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 16 23:28:50.361741 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 16 23:28:50.415522 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 16 23:28:50.919075 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 16 23:28:50.919134 disk-uuid[607]: The operation has completed successfully. Apr 16 23:28:50.967980 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 23:28:50.968140 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 23:28:50.999593 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 23:28:51.015416 sh[627]: Success Apr 16 23:28:51.028846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 16 23:28:51.028950 kernel: device-mapper: uevent: version 1.0.3 Apr 16 23:28:51.030138 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 23:28:51.039741 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Apr 16 23:28:51.087076 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 23:28:51.089622 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 23:28:51.097109 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 16 23:28:51.107717 kernel: BTRFS: device fsid 10cedb9e-43f1-4d98-9b55-3b84c3a61868 devid 1 transid 33 /dev/mapper/usr (254:0) scanned by mount (639) Apr 16 23:28:51.109806 kernel: BTRFS info (device dm-0): first mount of filesystem 10cedb9e-43f1-4d98-9b55-3b84c3a61868 Apr 16 23:28:51.109859 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 16 23:28:51.117887 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 16 23:28:51.117945 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 23:28:51.117955 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 23:28:51.120256 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 23:28:51.122238 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 23:28:51.124356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 23:28:51.125664 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 23:28:51.129541 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 16 23:28:51.159970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (670)
Apr 16 23:28:51.161755 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:28:51.161810 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:28:51.167765 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:28:51.167826 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:28:51.167841 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:28:51.172720 kernel: BTRFS info (device sda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:28:51.175730 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 23:28:51.178773 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 23:28:51.288844 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:28:51.300414 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:28:51.321064 ignition[717]: Ignition 2.22.0
Apr 16 23:28:51.321079 ignition[717]: Stage: fetch-offline
Apr 16 23:28:51.321114 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:28:51.321122 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:28:51.321203 ignition[717]: parsed url from cmdline: ""
Apr 16 23:28:51.325479 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:28:51.321206 ignition[717]: no config URL provided
Apr 16 23:28:51.321212 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:28:51.321218 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:28:51.321223 ignition[717]: failed to fetch config: resource requires networking
Apr 16 23:28:51.321482 ignition[717]: Ignition finished successfully
Apr 16 23:28:51.353989 systemd-networkd[814]: lo: Link UP
Apr 16 23:28:51.354007 systemd-networkd[814]: lo: Gained carrier
Apr 16 23:28:51.355608 systemd-networkd[814]: Enumeration completed
Apr 16 23:28:51.356069 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:51.356072 systemd-networkd[814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:28:51.356577 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:51.356580 systemd-networkd[814]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:28:51.356603 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:28:51.356856 systemd-networkd[814]: eth0: Link UP
Apr 16 23:28:51.356985 systemd-networkd[814]: eth1: Link UP
Apr 16 23:28:51.357126 systemd-networkd[814]: eth0: Gained carrier
Apr 16 23:28:51.357134 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:51.358654 systemd[1]: Reached target network.target - Network.
Apr 16 23:28:51.360742 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 23:28:51.363363 systemd-networkd[814]: eth1: Gained carrier
Apr 16 23:28:51.363374 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:51.388067 ignition[818]: Ignition 2.22.0
Apr 16 23:28:51.388079 ignition[818]: Stage: fetch
Apr 16 23:28:51.388220 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:28:51.388230 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:28:51.388308 ignition[818]: parsed url from cmdline: ""
Apr 16 23:28:51.388311 ignition[818]: no config URL provided
Apr 16 23:28:51.388315 ignition[818]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:28:51.388321 ignition[818]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:28:51.388353 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 16 23:28:51.389859 ignition[818]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 16 23:28:51.403813 systemd-networkd[814]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 23:28:51.418846 systemd-networkd[814]: eth0: DHCPv4 address 46.224.1.2/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 23:28:51.590551 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 16 23:28:51.598554 ignition[818]: GET result: OK
Apr 16 23:28:51.598775 ignition[818]: parsing config with SHA512: d1996d82eba0d30312228177254d93240d3049f8d1be5e6550e05f73afd31efa3acefcca18698e8acb8d81634c4557299d9876ef44bcdbba9ef28f0b6173ea3b
Apr 16 23:28:51.605083 unknown[818]: fetched base config from "system"
Apr 16 23:28:51.605098 unknown[818]: fetched base config from "system"
Apr 16 23:28:51.605398 ignition[818]: fetch: fetch complete
Apr 16 23:28:51.605103 unknown[818]: fetched user config from "hetzner"
Apr 16 23:28:51.605402 ignition[818]: fetch: fetch passed
Apr 16 23:28:51.610780 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 23:28:51.605445 ignition[818]: Ignition finished successfully
Apr 16 23:28:51.614399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 23:28:51.649529 ignition[826]: Ignition 2.22.0
Apr 16 23:28:51.649549 ignition[826]: Stage: kargs
Apr 16 23:28:51.651437 ignition[826]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:28:51.651455 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:28:51.654424 ignition[826]: kargs: kargs passed
Apr 16 23:28:51.654484 ignition[826]: Ignition finished successfully
Apr 16 23:28:51.657126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 23:28:51.660649 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 23:28:51.693130 ignition[832]: Ignition 2.22.0
Apr 16 23:28:51.693150 ignition[832]: Stage: disks
Apr 16 23:28:51.693332 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:28:51.693344 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:28:51.694429 ignition[832]: disks: disks passed
Apr 16 23:28:51.694490 ignition[832]: Ignition finished successfully
Apr 16 23:28:51.696995 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 23:28:51.698058 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 23:28:51.699127 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 23:28:51.700716 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:28:51.702077 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:28:51.703078 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:28:51.705004 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:28:51.737918 systemd-fsck[840]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Apr 16 23:28:51.743079 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 23:28:51.746115 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 23:28:51.833931 kernel: EXT4-fs (sda9): mounted filesystem 717eabe0-7ee2-4bf7-a9aa-0d27bb05c125 r/w with ordered data mode. Quota mode: none.
Apr 16 23:28:51.834920 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 23:28:51.836251 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:28:51.838969 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:28:51.841299 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 23:28:51.846204 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 16 23:28:51.846852 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 23:28:51.846880 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:28:51.857809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 23:28:51.861653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 23:28:51.871720 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (848)
Apr 16 23:28:51.873713 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:28:51.873756 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:28:51.879323 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:28:51.879369 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:28:51.879379 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:28:51.881885 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:28:51.913792 coreos-metadata[850]: Apr 16 23:28:51.913 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 16 23:28:51.916183 coreos-metadata[850]: Apr 16 23:28:51.916 INFO Fetch successful
Apr 16 23:28:51.916183 coreos-metadata[850]: Apr 16 23:28:51.916 INFO wrote hostname ci-4459-2-4-n-fff9fc0546 to /sysroot/etc/hostname
Apr 16 23:28:51.919260 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:28:51.923269 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 23:28:51.929637 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory
Apr 16 23:28:51.935167 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 23:28:51.942718 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 23:28:52.042733 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 23:28:52.046880 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 23:28:52.048157 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 23:28:52.067714 kernel: BTRFS info (device sda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:28:52.096520 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 23:28:52.101312 ignition[965]: INFO : Ignition 2.22.0
Apr 16 23:28:52.102420 ignition[965]: INFO : Stage: mount
Apr 16 23:28:52.103069 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:28:52.104770 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:28:52.105558 ignition[965]: INFO : mount: mount passed
Apr 16 23:28:52.105558 ignition[965]: INFO : Ignition finished successfully
Apr 16 23:28:52.108448 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 23:28:52.110127 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 23:28:52.112198 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 23:28:52.135982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:28:52.161452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (977)
Apr 16 23:28:52.161526 kernel: BTRFS info (device sda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:28:52.161553 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:28:52.165913 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:28:52.165976 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:28:52.165992 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:28:52.168500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:28:52.202726 ignition[994]: INFO : Ignition 2.22.0
Apr 16 23:28:52.202726 ignition[994]: INFO : Stage: files
Apr 16 23:28:52.202726 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:28:52.202726 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:28:52.207393 ignition[994]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 23:28:52.207393 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 23:28:52.207393 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 23:28:52.211295 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 23:28:52.213167 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 23:28:52.215086 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 23:28:52.214899 unknown[994]: wrote ssh authorized keys file for user: core
Apr 16 23:28:52.219900 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:28:52.219900 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 16 23:28:52.270250 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 23:28:52.383837 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:28:52.383837 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 23:28:52.383837 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 16 23:28:52.387674 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Apr 16 23:28:52.528003 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 23:28:53.012122 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 16 23:28:53.012122 ignition[994]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:28:53.016300 ignition[994]: INFO : files: files passed
Apr 16 23:28:53.016300 ignition[994]: INFO : Ignition finished successfully
Apr 16 23:28:53.019746 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 23:28:53.022310 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 23:28:53.025960 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 23:28:53.039782 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 23:28:53.039905 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:28:53.051788 initrd-setup-root-after-ignition[1024]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:28:53.053757 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:28:53.054614 initrd-setup-root-after-ignition[1024]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:28:53.056675 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:28:53.057599 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 23:28:53.059958 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 23:28:53.116684 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 23:28:53.116923 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 23:28:53.120068 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 23:28:53.122538 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 23:28:53.123448 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 23:28:53.124352 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 23:28:53.162810 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:28:53.165183 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 23:28:53.194180 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:28:53.195904 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:28:53.197112 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 23:28:53.198571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 23:28:53.198778 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:28:53.200514 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 23:28:53.202038 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 23:28:53.202958 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 23:28:53.203973 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:28:53.205116 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 23:28:53.206215 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:28:53.207338 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 23:28:53.208429 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:28:53.209514 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 23:28:53.210569 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 23:28:53.211509 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 23:28:53.212322 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 23:28:53.212494 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:28:53.213757 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:28:53.214919 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:28:53.215916 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 23:28:53.217761 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:28:53.218495 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 23:28:53.218611 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:28:53.220646 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 23:28:53.220820 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:28:53.222076 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 23:28:53.222228 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 23:28:53.223153 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 16 23:28:53.223295 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:28:53.226895 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 23:28:53.227441 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 23:28:53.229874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:28:53.232025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 23:28:53.234463 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 23:28:53.234648 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:28:53.236312 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 23:28:53.236453 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:28:53.243996 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 23:28:53.244111 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 23:28:53.245928 systemd-networkd[814]: eth1: Gained IPv6LL
Apr 16 23:28:53.253089 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 23:28:53.265822 ignition[1048]: INFO : Ignition 2.22.0
Apr 16 23:28:53.265822 ignition[1048]: INFO : Stage: umount
Apr 16 23:28:53.270220 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:28:53.270220 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:28:53.270220 ignition[1048]: INFO : umount: umount passed
Apr 16 23:28:53.270220 ignition[1048]: INFO : Ignition finished successfully
Apr 16 23:28:53.270536 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 23:28:53.270913 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 23:28:53.272155 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 23:28:53.272219 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 23:28:53.275523 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 23:28:53.275573 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 23:28:53.276675 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 23:28:53.277559 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 23:28:53.279734 systemd[1]: Stopped target network.target - Network.
Apr 16 23:28:53.280920 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 23:28:53.280993 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:28:53.282623 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 23:28:53.283618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 23:28:53.285602 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:28:53.287183 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 23:28:53.291587 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 23:28:53.294423 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 23:28:53.294471 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:28:53.295246 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 23:28:53.295279 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:28:53.296530 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 23:28:53.296591 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 23:28:53.297283 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 23:28:53.297320 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 23:28:53.298543 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 23:28:53.300802 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 23:28:53.303176 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 23:28:53.303273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 23:28:53.304445 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 23:28:53.304539 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 23:28:53.318485 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 23:28:53.318664 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 23:28:53.322371 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 23:28:53.322629 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 23:28:53.323249 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 23:28:53.328277 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 23:28:53.328848 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 23:28:53.329730 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 23:28:53.329780 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:28:53.332318 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 23:28:53.333804 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 23:28:53.333863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:28:53.335875 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 23:28:53.335929 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:28:53.338908 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 23:28:53.338954 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:28:53.339785 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 23:28:53.339831 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:28:53.342823 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:28:53.346063 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 23:28:53.346135 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:28:53.360654 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 23:28:53.360860 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 23:28:53.362284 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 23:28:53.362486 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:28:53.365108 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 23:28:53.365179 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:28:53.366372 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 23:28:53.366402 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:28:53.367408 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 23:28:53.367457 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:28:53.369105 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 23:28:53.369151 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:28:53.370751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 23:28:53.370806 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:28:53.373198 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 23:28:53.374789 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 23:28:53.374854 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:28:53.377722 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 23:28:53.377772 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:28:53.379831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:28:53.379881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:53.383318 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 23:28:53.383393 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 23:28:53.383431 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:28:53.391689 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 23:28:53.391924 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 23:28:53.394187 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 23:28:53.396538 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 23:28:53.423374 systemd[1]: Switching root.
Apr 16 23:28:53.458920 systemd-journald[245]: Journal stopped
Apr 16 23:28:54.348112 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Apr 16 23:28:54.348178 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 23:28:54.348194 kernel: SELinux: policy capability open_perms=1
Apr 16 23:28:54.348203 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 23:28:54.348212 kernel: SELinux: policy capability always_check_network=0
Apr 16 23:28:54.348224 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 23:28:54.348233 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 23:28:54.348242 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 23:28:54.348251 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 23:28:54.348260 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 23:28:54.348271 kernel: audit: type=1403 audit(1776382133.585:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 23:28:54.348284 systemd[1]: Successfully loaded SELinux policy in 50.898ms.
Apr 16 23:28:54.348302 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.234ms.
Apr 16 23:28:54.348313 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:28:54.348324 systemd[1]: Detected virtualization kvm.
Apr 16 23:28:54.348334 systemd[1]: Detected architecture arm64.
Apr 16 23:28:54.348346 systemd[1]: Detected first boot.
Apr 16 23:28:54.348356 systemd[1]: Hostname set to .
Apr 16 23:28:54.348367 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:28:54.348377 zram_generator::config[1092]: No configuration found.
Apr 16 23:28:54.348387 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 23:28:54.348397 systemd[1]: Populated /etc with preset unit settings.
Apr 16 23:28:54.348408 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 23:28:54.348418 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 23:28:54.348431 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 23:28:54.348442 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:28:54.348457 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 23:28:54.348467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 23:28:54.348477 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 23:28:54.348487 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 23:28:54.348497 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 23:28:54.348507 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 23:28:54.348518 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 23:28:54.348529 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 23:28:54.348542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:28:54.348552 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:28:54.348563 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 23:28:54.348572 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 23:28:54.348582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 23:28:54.348594 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:28:54.348605 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 16 23:28:54.348615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:28:54.348625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:28:54.348635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 23:28:54.348645 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 23:28:54.348655 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:28:54.348665 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 23:28:54.348677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:28:54.348687 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:28:54.348711 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:28:54.348724 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:28:54.348734 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 23:28:54.348744 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 23:28:54.348754 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 23:28:54.348764 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:28:54.348774 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:28:54.348786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:28:54.348797 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 23:28:54.348806 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 23:28:54.348816 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 23:28:54.348828 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 23:28:54.348837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 23:28:54.348847 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 23:28:54.348857 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 23:28:54.348868 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 23:28:54.348879 systemd[1]: Reached target machines.target - Containers.
Apr 16 23:28:54.348889 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 23:28:54.348900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:28:54.348910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:28:54.348920 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 23:28:54.348930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:28:54.348942 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:28:54.348954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:28:54.348965 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 23:28:54.348975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:28:54.348986 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 23:28:54.349036 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 23:28:54.349052 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 23:28:54.349062 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 23:28:54.349072 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 23:28:54.349082 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:28:54.349094 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:28:54.349104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:28:54.349114 kernel: fuse: init (API version 7.41)
Apr 16 23:28:54.349125 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:28:54.349137 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 23:28:54.349147 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 23:28:54.349157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:28:54.349169 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 23:28:54.349180 systemd[1]: Stopped verity-setup.service.
Apr 16 23:28:54.349190 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 23:28:54.349220 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 23:28:54.349232 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 23:28:54.349243 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 23:28:54.349253 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 23:28:54.349263 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 23:28:54.349273 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:28:54.349284 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 23:28:54.349294 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 23:28:54.349304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:28:54.349316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:28:54.349326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:28:54.349336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:28:54.349346 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 23:28:54.349355 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 23:28:54.349365 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:28:54.349403 systemd-journald[1156]: Collecting audit messages is disabled.
Apr 16 23:28:54.349429 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:28:54.349439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 23:28:54.349466 systemd-journald[1156]: Journal started
Apr 16 23:28:54.349490 systemd-journald[1156]: Runtime Journal (/run/log/journal/8dcbff955eb04ed591b10efd80f981c0) is 8M, max 76.5M, 68.5M free.
Apr 16 23:28:54.351127 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:28:54.083453 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 23:28:54.093855 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 16 23:28:54.094349 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 23:28:54.357714 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 23:28:54.363103 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 23:28:54.369068 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 23:28:54.369132 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:28:54.371214 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 23:28:54.379535 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 23:28:54.379599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:28:54.385119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 23:28:54.391743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:28:54.391816 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 23:28:54.399719 kernel: ACPI: bus type drm_connector registered
Apr 16 23:28:54.401956 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:28:54.406743 kernel: loop: module loaded
Apr 16 23:28:54.409723 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 23:28:54.414717 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:28:54.414673 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:28:54.414860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:28:54.416221 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:28:54.416376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:28:54.418190 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 23:28:54.419185 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 23:28:54.420973 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 23:28:54.423978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 23:28:54.425920 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 23:28:54.446761 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 23:28:54.457247 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 23:28:54.463187 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 23:28:54.463901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:28:54.464713 kernel: loop0: detected capacity change from 0 to 119840
Apr 16 23:28:54.465295 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 23:28:54.486820 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:28:54.496062 systemd-journald[1156]: Time spent on flushing to /var/log/journal/8dcbff955eb04ed591b10efd80f981c0 is 37.095ms for 1176 entries.
Apr 16 23:28:54.496062 systemd-journald[1156]: System Journal (/var/log/journal/8dcbff955eb04ed591b10efd80f981c0) is 8M, max 584.8M, 576.8M free.
Apr 16 23:28:54.546968 systemd-journald[1156]: Received client request to flush runtime journal.
Apr 16 23:28:54.547059 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 23:28:54.547076 kernel: loop1: detected capacity change from 0 to 8
Apr 16 23:28:54.538554 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 23:28:54.548431 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:28:54.557203 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 23:28:54.572790 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 23:28:54.580056 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Apr 16 23:28:54.580390 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Apr 16 23:28:54.580722 kernel: loop2: detected capacity change from 0 to 197488
Apr 16 23:28:54.592145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:28:54.615435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:28:54.624730 kernel: loop3: detected capacity change from 0 to 100632
Apr 16 23:28:54.665750 kernel: loop4: detected capacity change from 0 to 119840
Apr 16 23:28:54.689738 kernel: loop5: detected capacity change from 0 to 8
Apr 16 23:28:54.693889 kernel: loop6: detected capacity change from 0 to 197488
Apr 16 23:28:54.722737 kernel: loop7: detected capacity change from 0 to 100632
Apr 16 23:28:54.736175 (sd-merge)[1239]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 16 23:28:54.736644 (sd-merge)[1239]: Merged extensions into '/usr'.
Apr 16 23:28:54.744784 systemd[1]: Reload requested from client PID 1193 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 23:28:54.745099 systemd[1]: Reloading...
Apr 16 23:28:54.877717 zram_generator::config[1268]: No configuration found.
Apr 16 23:28:55.037263 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 23:28:55.083820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 23:28:55.084071 systemd[1]: Reloading finished in 338 ms.
Apr 16 23:28:55.109748 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 23:28:55.110983 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 23:28:55.112216 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 23:28:55.121268 systemd[1]: Starting ensure-sysext.service...
Apr 16 23:28:55.126923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:28:55.133223 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:28:55.148294 systemd[1]: Reload requested from client PID 1303 ('systemctl') (unit ensure-sysext.service)...
Apr 16 23:28:55.148315 systemd[1]: Reloading...
Apr 16 23:28:55.169152 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 23:28:55.169882 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 23:28:55.170148 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 23:28:55.170368 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 23:28:55.173282 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 23:28:55.173510 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Apr 16 23:28:55.173552 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Apr 16 23:28:55.179266 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:28:55.181484 systemd-tmpfiles[1304]: Skipping /boot
Apr 16 23:28:55.197115 systemd-udevd[1305]: Using default interface naming scheme 'v255'.
Apr 16 23:28:55.198171 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:28:55.198187 systemd-tmpfiles[1304]: Skipping /boot
Apr 16 23:28:55.249724 zram_generator::config[1338]: No configuration found.
Apr 16 23:28:55.491789 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 16 23:28:55.491976 systemd[1]: Reloading finished in 343 ms.
Apr 16 23:28:55.500374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:28:55.508848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:28:55.523135 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:28:55.530201 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 23:28:55.535111 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 23:28:55.556723 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 23:28:55.563402 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:28:55.569468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:28:55.572981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 23:28:55.582168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:28:55.583361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:28:55.586467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:28:55.588623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:28:55.589933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:28:55.590073 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:28:55.593304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:28:55.593467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:28:55.593561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:28:55.599917 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 23:28:55.605488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:28:55.613894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:28:55.615896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:28:55.616060 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:28:55.616865 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 23:28:55.637708 systemd[1]: Finished ensure-sysext.service.
Apr 16 23:28:55.649238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 23:28:55.654537 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:28:55.656114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:28:55.664819 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:28:55.669378 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 23:28:55.690770 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 23:28:55.698757 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 23:28:55.700645 augenrules[1452]: No rules
Apr 16 23:28:55.703576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:28:55.706780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:28:55.712043 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:28:55.712326 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:28:55.713194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:28:55.716252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:28:55.719290 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:28:55.719455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:28:55.720515 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:28:55.735726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:28:55.735797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:28:55.753159 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 23:28:55.755763 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 23:28:55.758525 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 23:28:55.788356 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 16 23:28:55.788498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:28:55.790631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:28:55.793939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:28:55.800946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:28:55.801642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:28:55.801686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:28:55.801741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 23:28:55.813144 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 16 23:28:55.813263 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 16 23:28:55.813277 kernel: [drm] features: -context_init
Apr 16 23:28:55.833521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:28:55.833769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:28:55.837319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:28:55.838045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:28:55.841717 kernel: [drm] number of scanouts: 1
Apr 16 23:28:55.841773 kernel: [drm] number of cap sets: 0
Apr 16 23:28:55.842162 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:28:55.842773 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:28:55.848715 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Apr 16 23:28:55.855080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:28:55.855154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:28:55.866228 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 23:28:55.876501 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 23:28:55.893706 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 16 23:28:55.901957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:55.918847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:28:55.919081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:55.921476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:28:56.056496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:28:56.075416 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 23:28:56.076336 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 23:28:56.082147 systemd-resolved[1419]: Positive Trust Anchors:
Apr 16 23:28:56.083622 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:28:56.083662 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:28:56.088612 systemd-resolved[1419]: Using system hostname 'ci-4459-2-4-n-fff9fc0546'.
Apr 16 23:28:56.090334 systemd-networkd[1418]: lo: Link UP
Apr 16 23:28:56.090344 systemd-networkd[1418]: lo: Gained carrier
Apr 16 23:28:56.092556 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:28:56.092918 systemd-networkd[1418]: Enumeration completed
Apr 16 23:28:56.093414 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:56.093417 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:28:56.094840 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:28:56.094870 systemd-networkd[1418]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:56.094873 systemd-networkd[1418]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:28:56.095723 systemd-networkd[1418]: eth0: Link UP
Apr 16 23:28:56.095854 systemd-networkd[1418]: eth0: Gained carrier
Apr 16 23:28:56.095870 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:56.096617 systemd[1]: Reached target network.target - Network.
Apr 16 23:28:56.097316 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:28:56.098246 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:28:56.098964 systemd-networkd[1418]: eth1: Link UP
Apr 16 23:28:56.099431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 23:28:56.100341 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 23:28:56.100567 systemd-networkd[1418]: eth1: Gained carrier
Apr 16 23:28:56.100597 systemd-networkd[1418]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:28:56.101581 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 23:28:56.102406 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 23:28:56.103385 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 23:28:56.104294 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 23:28:56.104384 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:28:56.105049 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:28:56.106880 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 23:28:56.109539 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 23:28:56.112536 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 16 23:28:56.113606 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 16 23:28:56.114466 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 16 23:28:56.120232 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 23:28:56.122326 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 16 23:28:56.124975 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:28:56.128951 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:28:56.130233 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 23:28:56.131279 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 23:28:56.132914 systemd[1]: Reached target basic.target - Basic System. Apr 16 23:28:56.133522 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:28:56.133546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:28:56.137081 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 23:28:56.149925 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 16 23:28:56.150172 systemd-networkd[1418]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 16 23:28:56.153937 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 23:28:56.157325 systemd-networkd[1418]: eth0: DHCPv4 address 46.224.1.2/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 16 23:28:56.157746 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection. Apr 16 23:28:56.157829 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection. Apr 16 23:28:56.158041 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection. Apr 16 23:28:56.159260 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 23:28:56.162340 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 23:28:56.166673 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 23:28:56.167433 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 23:28:56.171873 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 23:28:56.174364 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 16 23:28:56.176932 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 16 23:28:56.185948 jq[1511]: false Apr 16 23:28:56.179656 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 23:28:56.186592 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 23:28:56.195998 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 23:28:56.198840 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 23:28:56.199398 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 23:28:56.202049 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 23:28:56.210975 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 23:28:56.219316 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 23:28:56.224866 coreos-metadata[1508]: Apr 16 23:28:56.223 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 16 23:28:56.224866 coreos-metadata[1508]: Apr 16 23:28:56.224 INFO Fetch successful Apr 16 23:28:56.224866 coreos-metadata[1508]: Apr 16 23:28:56.224 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 16 23:28:56.220314 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 23:28:56.225348 coreos-metadata[1508]: Apr 16 23:28:56.225 INFO Fetch successful Apr 16 23:28:56.220752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 23:28:56.241975 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 23:28:56.246101 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 16 23:28:56.257110 jq[1522]: true Apr 16 23:28:56.284543 extend-filesystems[1512]: Found /dev/sda6 Apr 16 23:28:56.293442 extend-filesystems[1512]: Found /dev/sda9 Apr 16 23:28:56.292090 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 23:28:56.305681 extend-filesystems[1512]: Checking size of /dev/sda9 Apr 16 23:28:56.305631 dbus-daemon[1509]: [system] SELinux support is enabled Apr 16 23:28:56.305845 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 23:28:56.319107 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 23:28:56.319149 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 23:28:56.323065 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 23:28:56.323090 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 23:28:56.324103 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 23:28:56.325090 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 23:28:56.326069 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 16 23:28:56.336164 tar[1525]: linux-arm64/LICENSE Apr 16 23:28:56.336164 tar[1525]: linux-arm64/helm Apr 16 23:28:56.342797 jq[1542]: true Apr 16 23:28:56.349872 extend-filesystems[1512]: Resized partition /dev/sda9 Apr 16 23:28:56.362727 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 23:28:56.363536 update_engine[1521]: I20260416 23:28:56.359424 1521 main.cc:92] Flatcar Update Engine starting Apr 16 23:28:56.379094 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 16 23:28:56.379160 update_engine[1521]: I20260416 23:28:56.378881 1521 update_check_scheduler.cc:74] Next update check in 11m47s Apr 16 23:28:56.374875 systemd[1]: Started update-engine.service - Update Engine. Apr 16 23:28:56.400039 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 23:28:56.458209 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 16 23:28:56.460589 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 23:28:56.466481 systemd-logind[1520]: New seat seat0. Apr 16 23:28:56.476379 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button) Apr 16 23:28:56.476408 systemd-logind[1520]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 16 23:28:56.476643 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 23:28:56.535174 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:28:56.567155 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 16 23:28:56.543749 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 23:28:56.559579 systemd[1]: Starting sshkeys.service... 
Apr 16 23:28:56.570593 extend-filesystems[1562]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 16 23:28:56.570593 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 16 23:28:56.570593 extend-filesystems[1562]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 16 23:28:56.576987 extend-filesystems[1512]: Resized filesystem in /dev/sda9 Apr 16 23:28:56.581328 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 23:28:56.581567 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 23:28:56.625177 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 16 23:28:56.631948 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 16 23:28:56.655519 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 23:28:56.719766 coreos-metadata[1599]: Apr 16 23:28:56.718 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 16 23:28:56.723813 coreos-metadata[1599]: Apr 16 23:28:56.723 INFO Fetch successful Apr 16 23:28:56.726368 unknown[1599]: wrote ssh authorized keys file for user: core Apr 16 23:28:56.746713 containerd[1544]: time="2026-04-16T23:28:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 23:28:56.747553 containerd[1544]: time="2026-04-16T23:28:56.747282000Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 23:28:56.763486 update-ssh-keys[1602]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:28:56.765568 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Apr 16 23:28:56.772818 containerd[1544]: time="2026-04-16T23:28:56.772779440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.16µs" Apr 16 23:28:56.772932 containerd[1544]: time="2026-04-16T23:28:56.772915400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 23:28:56.773004 containerd[1544]: time="2026-04-16T23:28:56.772988680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 23:28:56.773197 containerd[1544]: time="2026-04-16T23:28:56.773178000Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 23:28:56.773349 systemd[1]: Finished sshkeys.service. Apr 16 23:28:56.773866 containerd[1544]: time="2026-04-16T23:28:56.773844240Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 23:28:56.773954 containerd[1544]: time="2026-04-16T23:28:56.773940640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:28:56.774835 containerd[1544]: time="2026-04-16T23:28:56.774810240Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:28:56.774901 containerd[1544]: time="2026-04-16T23:28:56.774887320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:28:56.775218 containerd[1544]: time="2026-04-16T23:28:56.775193320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:28:56.775610 containerd[1544]: 
time="2026-04-16T23:28:56.775580360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:28:56.776165 containerd[1544]: time="2026-04-16T23:28:56.776145520Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:28:56.776237 containerd[1544]: time="2026-04-16T23:28:56.776223920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 23:28:56.776372 containerd[1544]: time="2026-04-16T23:28:56.776352560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 23:28:56.777920 containerd[1544]: time="2026-04-16T23:28:56.777884160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:28:56.777983 containerd[1544]: time="2026-04-16T23:28:56.777931520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:28:56.777983 containerd[1544]: time="2026-04-16T23:28:56.777944560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 23:28:56.778040 containerd[1544]: time="2026-04-16T23:28:56.777994800Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 23:28:56.778270 containerd[1544]: time="2026-04-16T23:28:56.778245320Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 23:28:56.778333 containerd[1544]: time="2026-04-16T23:28:56.778314280Z" level=info msg="metadata content store policy set" policy=shared Apr 16 23:28:56.782649 containerd[1544]: time="2026-04-16T23:28:56.782609720Z" level=info 
msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 23:28:56.782732 containerd[1544]: time="2026-04-16T23:28:56.782668560Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 23:28:56.782732 containerd[1544]: time="2026-04-16T23:28:56.782683440Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 23:28:56.782786 containerd[1544]: time="2026-04-16T23:28:56.782747360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 23:28:56.782786 containerd[1544]: time="2026-04-16T23:28:56.782762960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 23:28:56.782786 containerd[1544]: time="2026-04-16T23:28:56.782773280Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 23:28:56.782839 containerd[1544]: time="2026-04-16T23:28:56.782787040Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 23:28:56.782839 containerd[1544]: time="2026-04-16T23:28:56.782806840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 23:28:56.782839 containerd[1544]: time="2026-04-16T23:28:56.782819240Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 23:28:56.782839 containerd[1544]: time="2026-04-16T23:28:56.782829280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 23:28:56.782903 containerd[1544]: time="2026-04-16T23:28:56.782839000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 23:28:56.782903 containerd[1544]: time="2026-04-16T23:28:56.782852960Z" level=info 
msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 23:28:56.783821 containerd[1544]: time="2026-04-16T23:28:56.783790840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 23:28:56.783867 containerd[1544]: time="2026-04-16T23:28:56.783842600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 23:28:56.783867 containerd[1544]: time="2026-04-16T23:28:56.783859960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 23:28:56.783900 containerd[1544]: time="2026-04-16T23:28:56.783874320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 23:28:56.783900 containerd[1544]: time="2026-04-16T23:28:56.783890240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 23:28:56.783956 containerd[1544]: time="2026-04-16T23:28:56.783901680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 23:28:56.783956 containerd[1544]: time="2026-04-16T23:28:56.783913200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 23:28:56.783956 containerd[1544]: time="2026-04-16T23:28:56.783924080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 23:28:56.783956 containerd[1544]: time="2026-04-16T23:28:56.783935440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 23:28:56.783956 containerd[1544]: time="2026-04-16T23:28:56.783946560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 23:28:56.783956 containerd[1544]: time="2026-04-16T23:28:56.783956920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 
16 23:28:56.784703 containerd[1544]: time="2026-04-16T23:28:56.784173280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 23:28:56.784703 containerd[1544]: time="2026-04-16T23:28:56.784198240Z" level=info msg="Start snapshots syncer" Apr 16 23:28:56.784703 containerd[1544]: time="2026-04-16T23:28:56.784230560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 23:28:56.784767 containerd[1544]: time="2026-04-16T23:28:56.784511440Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var
/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 23:28:56.784767 containerd[1544]: time="2026-04-16T23:28:56.784562640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 23:28:56.784767 containerd[1544]: time="2026-04-16T23:28:56.784617440Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 23:28:56.785855 containerd[1544]: time="2026-04-16T23:28:56.785824600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 23:28:56.785905 containerd[1544]: time="2026-04-16T23:28:56.785862400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 23:28:56.785905 containerd[1544]: time="2026-04-16T23:28:56.785875120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 23:28:56.785905 containerd[1544]: time="2026-04-16T23:28:56.785887960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 23:28:56.785905 containerd[1544]: time="2026-04-16T23:28:56.785900000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 23:28:56.786002 containerd[1544]: time="2026-04-16T23:28:56.785916440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 23:28:56.786002 containerd[1544]: time="2026-04-16T23:28:56.785928720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 23:28:56.786002 containerd[1544]: 
time="2026-04-16T23:28:56.785986720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 23:28:56.786002 containerd[1544]: time="2026-04-16T23:28:56.786001920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 23:28:56.786138 containerd[1544]: time="2026-04-16T23:28:56.786013440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 23:28:56.786138 containerd[1544]: time="2026-04-16T23:28:56.786072480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:28:56.786138 containerd[1544]: time="2026-04-16T23:28:56.786092440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:28:56.786138 containerd[1544]: time="2026-04-16T23:28:56.786101320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:28:56.786138 containerd[1544]: time="2026-04-16T23:28:56.786110520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:28:56.786138 containerd[1544]: time="2026-04-16T23:28:56.786118200Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 23:28:56.786138 containerd[1544]: time="2026-04-16T23:28:56.786134160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 23:28:56.786259 containerd[1544]: time="2026-04-16T23:28:56.786146760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 23:28:56.786259 containerd[1544]: time="2026-04-16T23:28:56.786223720Z" level=info msg="runtime interface created" 
Apr 16 23:28:56.786259 containerd[1544]: time="2026-04-16T23:28:56.786228960Z" level=info msg="created NRI interface" Apr 16 23:28:56.786259 containerd[1544]: time="2026-04-16T23:28:56.786237040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 23:28:56.786259 containerd[1544]: time="2026-04-16T23:28:56.786248880Z" level=info msg="Connect containerd service" Apr 16 23:28:56.786336 containerd[1544]: time="2026-04-16T23:28:56.786270200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 23:28:56.788718 containerd[1544]: time="2026-04-16T23:28:56.787794680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:28:56.936052 containerd[1544]: time="2026-04-16T23:28:56.934773320Z" level=info msg="Start subscribing containerd event" Apr 16 23:28:56.936052 containerd[1544]: time="2026-04-16T23:28:56.935908440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 23:28:56.936259 containerd[1544]: time="2026-04-16T23:28:56.936227480Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 16 23:28:56.937010 containerd[1544]: time="2026-04-16T23:28:56.936056000Z" level=info msg="Start recovering state" Apr 16 23:28:56.937193 containerd[1544]: time="2026-04-16T23:28:56.937179240Z" level=info msg="Start event monitor" Apr 16 23:28:56.937283 containerd[1544]: time="2026-04-16T23:28:56.937271680Z" level=info msg="Start cni network conf syncer for default" Apr 16 23:28:56.937361 containerd[1544]: time="2026-04-16T23:28:56.937349320Z" level=info msg="Start streaming server" Apr 16 23:28:56.937413 containerd[1544]: time="2026-04-16T23:28:56.937397360Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 23:28:56.937458 containerd[1544]: time="2026-04-16T23:28:56.937448480Z" level=info msg="runtime interface starting up..." Apr 16 23:28:56.937500 containerd[1544]: time="2026-04-16T23:28:56.937490480Z" level=info msg="starting plugins..." Apr 16 23:28:56.937564 containerd[1544]: time="2026-04-16T23:28:56.937553120Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 23:28:56.940243 containerd[1544]: time="2026-04-16T23:28:56.939432920Z" level=info msg="containerd successfully booted in 0.195211s" Apr 16 23:28:56.939558 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 23:28:56.949049 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 23:28:56.969969 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 23:28:56.975233 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 23:28:56.997077 tar[1525]: linux-arm64/README.md Apr 16 23:28:56.997238 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 23:28:56.997670 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 23:28:57.004827 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 23:28:57.023211 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 16 23:28:57.028312 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 23:28:57.032635 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 23:28:57.037857 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 16 23:28:57.038814 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 23:28:57.786819 systemd-networkd[1418]: eth1: Gained IPv6LL Apr 16 23:28:57.787458 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection. Apr 16 23:28:57.790905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 23:28:57.793177 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 23:28:57.795957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:28:57.800012 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 23:28:57.838779 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 23:28:58.107174 systemd-networkd[1418]: eth0: Gained IPv6LL Apr 16 23:28:58.107682 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection. Apr 16 23:28:58.559170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:28:58.562570 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 23:28:58.567819 systemd[1]: Startup finished in 2.320s (kernel) + 4.980s (initrd) + 5.031s (userspace) = 12.332s. 
Apr 16 23:28:58.569149 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:28:59.007691 kubelet[1656]: E0416 23:28:59.007522 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:28:59.011527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:28:59.011676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:28:59.012592 systemd[1]: kubelet.service: Consumed 805ms CPU time, 247.5M memory peak. Apr 16 23:29:09.262626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 23:29:09.265097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:29:09.414255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:29:09.425596 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:29:09.470391 kubelet[1676]: E0416 23:29:09.470326 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:29:09.474128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:29:09.474330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:29:09.474976 systemd[1]: kubelet.service: Consumed 166ms CPU time, 107.7M memory peak. 
Apr 16 23:29:19.725139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 23:29:19.727241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:29:19.873844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:29:19.889289 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:29:19.933461 kubelet[1692]: E0416 23:29:19.933408 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:29:19.936387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:29:19.936564 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:29:19.937241 systemd[1]: kubelet.service: Consumed 161ms CPU time, 107M memory peak. Apr 16 23:29:28.367372 systemd-timesyncd[1447]: Contacted time server 185.216.176.59:123 (2.flatcar.pool.ntp.org). Apr 16 23:29:28.367450 systemd-timesyncd[1447]: Initial clock synchronization to Thu 2026-04-16 23:29:28.473620 UTC. Apr 16 23:29:30.187826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 23:29:30.191360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:29:30.353487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:29:30.367481 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:29:30.419193 kubelet[1707]: E0416 23:29:30.419125 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:29:30.422664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:29:30.422928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:29:30.423440 systemd[1]: kubelet.service: Consumed 171ms CPU time, 104.4M memory peak. Apr 16 23:29:38.452207 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 23:29:38.454094 systemd[1]: Started sshd@0-46.224.1.2:22-50.85.169.122:36340.service - OpenSSH per-connection server daemon (50.85.169.122:36340). Apr 16 23:29:38.595181 sshd[1715]: Accepted publickey for core from 50.85.169.122 port 36340 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:29:38.597899 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:29:38.611154 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 23:29:38.612879 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 23:29:38.616867 systemd-logind[1520]: New session 1 of user core. Apr 16 23:29:38.644139 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 23:29:38.648887 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 16 23:29:38.668800 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 16 23:29:38.672373 systemd-logind[1520]: New session c1 of user core.
Apr 16 23:29:38.803408 systemd[1720]: Queued start job for default target default.target.
Apr 16 23:29:38.814935 systemd[1720]: Created slice app.slice - User Application Slice.
Apr 16 23:29:38.815215 systemd[1720]: Reached target paths.target - Paths.
Apr 16 23:29:38.815559 systemd[1720]: Reached target timers.target - Timers.
Apr 16 23:29:38.818284 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 16 23:29:38.833024 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 16 23:29:38.833126 systemd[1720]: Reached target sockets.target - Sockets.
Apr 16 23:29:38.833166 systemd[1720]: Reached target basic.target - Basic System.
Apr 16 23:29:38.833192 systemd[1720]: Reached target default.target - Main User Target.
Apr 16 23:29:38.833216 systemd[1720]: Startup finished in 152ms.
Apr 16 23:29:38.833390 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 16 23:29:38.841139 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 16 23:29:38.900916 systemd[1]: Started sshd@1-46.224.1.2:22-50.85.169.122:36344.service - OpenSSH per-connection server daemon (50.85.169.122:36344).
Apr 16 23:29:39.032801 sshd[1731]: Accepted publickey for core from 50.85.169.122 port 36344 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8
Apr 16 23:29:39.034633 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:39.040553 systemd-logind[1520]: New session 2 of user core.
Apr 16 23:29:39.046086 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 16 23:29:39.092036 sshd[1734]: Connection closed by 50.85.169.122 port 36344
Apr 16 23:29:39.093350 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Apr 16 23:29:39.098857 systemd[1]: sshd@1-46.224.1.2:22-50.85.169.122:36344.service: Deactivated successfully.
Apr 16 23:29:39.101443 systemd[1]: session-2.scope: Deactivated successfully.
Apr 16 23:29:39.103877 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit.
Apr 16 23:29:39.123941 systemd[1]: Started sshd@2-46.224.1.2:22-50.85.169.122:36346.service - OpenSSH per-connection server daemon (50.85.169.122:36346).
Apr 16 23:29:39.125145 systemd-logind[1520]: Removed session 2.
Apr 16 23:29:39.245225 sshd[1740]: Accepted publickey for core from 50.85.169.122 port 36346 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8
Apr 16 23:29:39.246997 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:39.254651 systemd-logind[1520]: New session 3 of user core.
Apr 16 23:29:39.262083 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 16 23:29:39.301039 sshd[1743]: Connection closed by 50.85.169.122 port 36346
Apr 16 23:29:39.301850 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Apr 16 23:29:39.308400 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit.
Apr 16 23:29:39.309272 systemd[1]: sshd@2-46.224.1.2:22-50.85.169.122:36346.service: Deactivated successfully.
Apr 16 23:29:39.313838 systemd[1]: session-3.scope: Deactivated successfully.
Apr 16 23:29:39.316244 systemd-logind[1520]: Removed session 3.
Apr 16 23:29:39.326859 systemd[1]: Started sshd@3-46.224.1.2:22-50.85.169.122:36348.service - OpenSSH per-connection server daemon (50.85.169.122:36348).
Apr 16 23:29:39.454055 sshd[1749]: Accepted publickey for core from 50.85.169.122 port 36348 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8
Apr 16 23:29:39.456908 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:39.463227 systemd-logind[1520]: New session 4 of user core.
Apr 16 23:29:39.472043 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 16 23:29:39.516426 sshd[1752]: Connection closed by 50.85.169.122 port 36348
Apr 16 23:29:39.517476 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Apr 16 23:29:39.522997 systemd[1]: sshd@3-46.224.1.2:22-50.85.169.122:36348.service: Deactivated successfully.
Apr 16 23:29:39.525280 systemd[1]: session-4.scope: Deactivated successfully.
Apr 16 23:29:39.527049 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit.
Apr 16 23:29:39.528647 systemd-logind[1520]: Removed session 4.
Apr 16 23:29:39.544243 systemd[1]: Started sshd@4-46.224.1.2:22-50.85.169.122:44422.service - OpenSSH per-connection server daemon (50.85.169.122:44422).
Apr 16 23:29:39.668866 sshd[1758]: Accepted publickey for core from 50.85.169.122 port 44422 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8
Apr 16 23:29:39.670930 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:39.678240 systemd-logind[1520]: New session 5 of user core.
Apr 16 23:29:39.685021 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 16 23:29:39.723857 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 23:29:39.724158 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:39.738208 sudo[1762]: pam_unix(sudo:session): session closed for user root
Apr 16 23:29:39.754455 sshd[1761]: Connection closed by 50.85.169.122 port 44422
Apr 16 23:29:39.754255 sshd-session[1758]: pam_unix(sshd:session): session closed for user core
Apr 16 23:29:39.763118 systemd[1]: sshd@4-46.224.1.2:22-50.85.169.122:44422.service: Deactivated successfully.
Apr 16 23:29:39.765603 systemd[1]: session-5.scope: Deactivated successfully.
Apr 16 23:29:39.766939 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit.
Apr 16 23:29:39.769025 systemd-logind[1520]: Removed session 5.
Apr 16 23:29:39.789769 systemd[1]: Started sshd@5-46.224.1.2:22-50.85.169.122:44432.service - OpenSSH per-connection server daemon (50.85.169.122:44432).
Apr 16 23:29:39.924796 sshd[1768]: Accepted publickey for core from 50.85.169.122 port 44432 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8
Apr 16 23:29:39.926201 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:39.932283 systemd-logind[1520]: New session 6 of user core.
Apr 16 23:29:39.940114 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 16 23:29:39.969623 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 23:29:39.970005 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:39.974472 sudo[1773]: pam_unix(sudo:session): session closed for user root
Apr 16 23:29:39.980489 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 16 23:29:39.981131 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:39.992883 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:29:40.033227 augenrules[1795]: No rules
Apr 16 23:29:40.034904 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:29:40.036747 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:29:40.038261 sudo[1772]: pam_unix(sudo:session): session closed for user root
Apr 16 23:29:40.054758 sshd[1771]: Connection closed by 50.85.169.122 port 44432
Apr 16 23:29:40.055593 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Apr 16 23:29:40.061153 systemd[1]: sshd@5-46.224.1.2:22-50.85.169.122:44432.service: Deactivated successfully.
Apr 16 23:29:40.064428 systemd[1]: session-6.scope: Deactivated successfully.
Apr 16 23:29:40.066075 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit.
Apr 16 23:29:40.067573 systemd-logind[1520]: Removed session 6.
Apr 16 23:29:40.086137 systemd[1]: Started sshd@6-46.224.1.2:22-50.85.169.122:44440.service - OpenSSH per-connection server daemon (50.85.169.122:44440).
Apr 16 23:29:40.212766 sshd[1804]: Accepted publickey for core from 50.85.169.122 port 44440 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8
Apr 16 23:29:40.215071 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:29:40.222794 systemd-logind[1520]: New session 7 of user core.
Apr 16 23:29:40.228095 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 23:29:40.258429 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 23:29:40.258870 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 23:29:40.586928 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 16 23:29:40.588891 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 23:29:40.591877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:29:40.604284 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 23:29:40.746531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:29:40.758287 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:29:40.799165 kubelet[1840]: E0416 23:29:40.799066 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:29:40.801587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:29:40.801725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:29:40.801984 systemd[1]: kubelet.service: Consumed 154ms CPU time, 107.1M memory peak.
Apr 16 23:29:40.843865 dockerd[1826]: time="2026-04-16T23:29:40.843730875Z" level=info msg="Starting up"
Apr 16 23:29:40.847193 dockerd[1826]: time="2026-04-16T23:29:40.847129329Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 16 23:29:40.862323 dockerd[1826]: time="2026-04-16T23:29:40.862267018Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 16 23:29:40.884512 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3610144504-merged.mount: Deactivated successfully.
Apr 16 23:29:40.908489 dockerd[1826]: time="2026-04-16T23:29:40.908406142Z" level=info msg="Loading containers: start."
Apr 16 23:29:40.922790 kernel: Initializing XFRM netlink socket
Apr 16 23:29:41.179616 systemd-networkd[1418]: docker0: Link UP
Apr 16 23:29:41.186758 dockerd[1826]: time="2026-04-16T23:29:41.186603589Z" level=info msg="Loading containers: done."
Apr 16 23:29:41.208278 dockerd[1826]: time="2026-04-16T23:29:41.207933543Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 23:29:41.208278 dockerd[1826]: time="2026-04-16T23:29:41.208028277Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 16 23:29:41.208278 dockerd[1826]: time="2026-04-16T23:29:41.208118077Z" level=info msg="Initializing buildkit"
Apr 16 23:29:41.235853 update_engine[1521]: I20260416 23:29:41.235800 1521 update_attempter.cc:509] Updating boot flags...
Apr 16 23:29:41.243783 dockerd[1826]: time="2026-04-16T23:29:41.242784065Z" level=info msg="Completed buildkit initialization"
Apr 16 23:29:41.261731 dockerd[1826]: time="2026-04-16T23:29:41.261366879Z" level=info msg="Daemon has completed initialization"
Apr 16 23:29:41.261731 dockerd[1826]: time="2026-04-16T23:29:41.261453952Z" level=info msg="API listen on /run/docker.sock"
Apr 16 23:29:41.261832 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 23:29:41.723822 containerd[1544]: time="2026-04-16T23:29:41.723624530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 16 23:29:41.881252 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1112791661-merged.mount: Deactivated successfully.
Apr 16 23:29:42.275907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347942382.mount: Deactivated successfully.
Apr 16 23:29:43.212733 containerd[1544]: time="2026-04-16T23:29:43.212586604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:43.215072 containerd[1544]: time="2026-04-16T23:29:43.215016458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=24608883"
Apr 16 23:29:43.216298 containerd[1544]: time="2026-04-16T23:29:43.216265254Z" level=info msg="ImageCreate event name:\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:43.219501 containerd[1544]: time="2026-04-16T23:29:43.219442999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:43.221986 containerd[1544]: time="2026-04-16T23:29:43.221556326Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"24605384\" in 1.497890771s"
Apr 16 23:29:43.221986 containerd[1544]: time="2026-04-16T23:29:43.221596728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\""
Apr 16 23:29:43.222153 containerd[1544]: time="2026-04-16T23:29:43.222096591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 16 23:29:44.217561 containerd[1544]: time="2026-04-16T23:29:44.216281764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:44.217561 containerd[1544]: time="2026-04-16T23:29:44.217515294Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=19073314"
Apr 16 23:29:44.218611 containerd[1544]: time="2026-04-16T23:29:44.218559966Z" level=info msg="ImageCreate event name:\"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:44.222127 containerd[1544]: time="2026-04-16T23:29:44.222070736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:44.225157 containerd[1544]: time="2026-04-16T23:29:44.225116352Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"20579933\" in 1.00297912s"
Apr 16 23:29:44.225306 containerd[1544]: time="2026-04-16T23:29:44.225290224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\""
Apr 16 23:29:44.226558 containerd[1544]: time="2026-04-16T23:29:44.226527080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 16 23:29:45.033392 containerd[1544]: time="2026-04-16T23:29:45.032892195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:45.033924 containerd[1544]: time="2026-04-16T23:29:45.033878581Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=13800856"
Apr 16 23:29:45.034955 containerd[1544]: time="2026-04-16T23:29:45.034909959Z" level=info msg="ImageCreate event name:\"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:45.038648 containerd[1544]: time="2026-04-16T23:29:45.038574224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:45.039662 containerd[1544]: time="2026-04-16T23:29:45.039618382Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"15307493\" in 813.055038ms"
Apr 16 23:29:45.039662 containerd[1544]: time="2026-04-16T23:29:45.039655199Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\""
Apr 16 23:29:45.041208 containerd[1544]: time="2026-04-16T23:29:45.041168492Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 16 23:29:45.860444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289228962.mount: Deactivated successfully.
Apr 16 23:29:46.068265 containerd[1544]: time="2026-04-16T23:29:46.068205022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:46.069443 containerd[1544]: time="2026-04-16T23:29:46.069399982Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=22340610"
Apr 16 23:29:46.070726 containerd[1544]: time="2026-04-16T23:29:46.070440009Z" level=info msg="ImageCreate event name:\"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:46.073652 containerd[1544]: time="2026-04-16T23:29:46.072656131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:46.073652 containerd[1544]: time="2026-04-16T23:29:46.073258357Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"22339603\" in 1.032056334s"
Apr 16 23:29:46.073652 containerd[1544]: time="2026-04-16T23:29:46.073285634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\""
Apr 16 23:29:46.074049 containerd[1544]: time="2026-04-16T23:29:46.074022125Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 16 23:29:46.521963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3732551552.mount: Deactivated successfully.
Apr 16 23:29:47.371859 containerd[1544]: time="2026-04-16T23:29:47.371798160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:47.373444 containerd[1544]: time="2026-04-16T23:29:47.373404689Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309"
Apr 16 23:29:47.374111 containerd[1544]: time="2026-04-16T23:29:47.374060397Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:47.378967 containerd[1544]: time="2026-04-16T23:29:47.378375739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:47.380898 containerd[1544]: time="2026-04-16T23:29:47.380603094Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.306482317s"
Apr 16 23:29:47.380898 containerd[1544]: time="2026-04-16T23:29:47.380661645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Apr 16 23:29:47.382196 containerd[1544]: time="2026-04-16T23:29:47.382147749Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 23:29:47.835966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352924518.mount: Deactivated successfully.
Apr 16 23:29:47.843449 containerd[1544]: time="2026-04-16T23:29:47.843392009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:47.845031 containerd[1544]: time="2026-04-16T23:29:47.844987205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Apr 16 23:29:47.846710 containerd[1544]: time="2026-04-16T23:29:47.846649401Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:47.849684 containerd[1544]: time="2026-04-16T23:29:47.849639272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:47.851318 containerd[1544]: time="2026-04-16T23:29:47.851174275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 468.978708ms"
Apr 16 23:29:47.851318 containerd[1544]: time="2026-04-16T23:29:47.851206434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 16 23:29:47.852041 containerd[1544]: time="2026-04-16T23:29:47.851862662Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 16 23:29:48.274950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47779330.mount: Deactivated successfully.
Apr 16 23:29:48.894188 containerd[1544]: time="2026-04-16T23:29:48.894115076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:48.896359 containerd[1544]: time="2026-04-16T23:29:48.896083185Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21752394"
Apr 16 23:29:48.897529 containerd[1544]: time="2026-04-16T23:29:48.897488582Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:48.901550 containerd[1544]: time="2026-04-16T23:29:48.901516816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:29:48.902567 containerd[1544]: time="2026-04-16T23:29:48.902528479Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.050636503s"
Apr 16 23:29:48.902567 containerd[1544]: time="2026-04-16T23:29:48.902564877Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Apr 16 23:29:50.908390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 16 23:29:50.910369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:29:50.928912 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:29:50.929192 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:29:50.930756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:29:50.933504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:29:50.964271 systemd[1]: Reload requested from client PID 2290 ('systemctl') (unit session-7.scope)...
Apr 16 23:29:50.964405 systemd[1]: Reloading...
Apr 16 23:29:51.087721 zram_generator::config[2337]: No configuration found.
Apr 16 23:29:51.266803 systemd[1]: Reloading finished in 301 ms.
Apr 16 23:29:51.323832 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:29:51.324263 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:29:51.325129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:29:51.325215 systemd[1]: kubelet.service: Consumed 105ms CPU time, 94.9M memory peak.
Apr 16 23:29:51.328412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:29:51.483460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:29:51.492066 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:29:51.529168 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:29:51.934488 kubelet[2382]: I0416 23:29:51.934247 2382 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 16 23:29:51.934488 kubelet[2382]: I0416 23:29:51.934309 2382 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:29:51.934488 kubelet[2382]: I0416 23:29:51.934335 2382 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 23:29:51.934488 kubelet[2382]: I0416 23:29:51.934341 2382 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:29:51.935221 kubelet[2382]: I0416 23:29:51.934636 2382 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 16 23:29:51.942726 kubelet[2382]: E0416 23:29:51.941742 2382 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.224.1.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.1.2:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 23:29:51.943980 kubelet[2382]: I0416 23:29:51.943947 2382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:29:51.949437 kubelet[2382]: I0416 23:29:51.949387 2382 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:29:51.953641 kubelet[2382]: I0416 23:29:51.953584 2382 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 23:29:51.954712 kubelet[2382]: I0416 23:29:51.953861 2382 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:29:51.954712 kubelet[2382]: I0416 23:29:51.953890 2382 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-fff9fc0546","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:29:51.954712 kubelet[2382]: I0416 23:29:51.954133 2382 topology_manager.go:143] "Creating topology manager with none policy"
Apr 16 23:29:51.954712 kubelet[2382]: I0416 23:29:51.954141 2382 container_manager_linux.go:308] "Creating device plugin manager"
Apr 16 23:29:51.954927 kubelet[2382]: I0416 23:29:51.954253 2382 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 23:29:51.956887 kubelet[2382]: I0416 23:29:51.956843 2382 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 16 23:29:51.957180 kubelet[2382]: I0416 23:29:51.957155 2382 kubelet.go:482] "Attempting to sync node with API server"
Apr 16 23:29:51.957180 kubelet[2382]: I0416 23:29:51.957173 2382 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:29:51.957869 kubelet[2382]: I0416 23:29:51.957190 2382 kubelet.go:394] "Adding apiserver pod source"
Apr 16 23:29:51.957869 kubelet[2382]: I0416 23:29:51.957199 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:29:51.961422 kubelet[2382]: I0416 23:29:51.961377 2382 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:29:51.962528 kubelet[2382]: I0416 23:29:51.962484 2382 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:29:51.962528 kubelet[2382]: I0416 23:29:51.962527 2382 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 23:29:51.962670 kubelet[2382]: W0416 23:29:51.962571 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 23:29:51.965107 kubelet[2382]: I0416 23:29:51.965083 2382 server.go:1257] "Started kubelet" Apr 16 23:29:51.967846 kubelet[2382]: I0416 23:29:51.967798 2382 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:29:51.968922 kubelet[2382]: I0416 23:29:51.968901 2382 server.go:317] "Adding debug handlers to kubelet server" Apr 16 23:29:51.970944 kubelet[2382]: I0416 23:29:51.970857 2382 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:29:51.971015 kubelet[2382]: I0416 23:29:51.970955 2382 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 23:29:51.971284 kubelet[2382]: I0416 23:29:51.971249 2382 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:29:51.973248 kubelet[2382]: E0416 23:29:51.971396 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.1.2:6443/api/v1/namespaces/default/events\": dial tcp 46.224.1.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-fff9fc0546.18a6fa3121681752 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-fff9fc0546,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-fff9fc0546,},FirstTimestamp:2026-04-16 23:29:51.965050706 +0000 UTC m=+0.470039186,LastTimestamp:2026-04-16 23:29:51.965050706 +0000 UTC m=+0.470039186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-fff9fc0546,}" Apr 16 23:29:51.977075 kubelet[2382]: I0416 23:29:51.976936 2382 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 16 23:29:51.977621 kubelet[2382]: E0416 23:29:51.977586 2382 kubelet.go:1656] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:29:51.978061 kubelet[2382]: I0416 23:29:51.978037 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:29:51.981736 kubelet[2382]: E0416 23:29:51.981657 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:51.982601 kubelet[2382]: I0416 23:29:51.982555 2382 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 16 23:29:51.982842 kubelet[2382]: I0416 23:29:51.982798 2382 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 23:29:51.982957 kubelet[2382]: I0416 23:29:51.982856 2382 reconciler.go:29] "Reconciler: start to sync state" Apr 16 23:29:51.984024 kubelet[2382]: I0416 23:29:51.983959 2382 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:29:51.984126 kubelet[2382]: I0416 23:29:51.984080 2382 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:29:51.985644 kubelet[2382]: I0416 23:29:51.985608 2382 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:29:52.001309 kubelet[2382]: I0416 23:29:52.001252 2382 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 16 23:29:52.005236 kubelet[2382]: E0416 23:29:52.005061 2382 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.1.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-fff9fc0546?timeout=10s\": dial tcp 46.224.1.2:6443: connect: connection refused" interval="200ms" Apr 16 23:29:52.008076 kubelet[2382]: I0416 23:29:52.008029 2382 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 23:29:52.008076 kubelet[2382]: I0416 23:29:52.008068 2382 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 23:29:52.008207 kubelet[2382]: I0416 23:29:52.008094 2382 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 23:29:52.008207 kubelet[2382]: E0416 23:29:52.008158 2382 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:29:52.023281 kubelet[2382]: I0416 23:29:52.023083 2382 cpu_manager.go:225] "Starting" policy="none" Apr 16 23:29:52.023281 kubelet[2382]: I0416 23:29:52.023267 2382 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 16 23:29:52.023414 kubelet[2382]: I0416 23:29:52.023293 2382 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 16 23:29:52.024904 kubelet[2382]: I0416 23:29:52.024881 2382 policy_none.go:50] "Start" Apr 16 23:29:52.025554 kubelet[2382]: I0416 23:29:52.025250 2382 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 23:29:52.025554 kubelet[2382]: I0416 23:29:52.025278 2382 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 23:29:52.026997 kubelet[2382]: I0416 23:29:52.026973 2382 policy_none.go:44] "Start" Apr 16 23:29:52.032035 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 16 23:29:52.042583 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 23:29:52.047686 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 23:29:52.058041 kubelet[2382]: E0416 23:29:52.057998 2382 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:29:52.060330 kubelet[2382]: I0416 23:29:52.060263 2382 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 23:29:52.062291 kubelet[2382]: I0416 23:29:52.060861 2382 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:29:52.062291 kubelet[2382]: I0416 23:29:52.062037 2382 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 23:29:52.063813 kubelet[2382]: E0416 23:29:52.063664 2382 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:29:52.064306 kubelet[2382]: E0416 23:29:52.064283 2382 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:52.128857 systemd[1]: Created slice kubepods-burstable-podf877b2b2ec3e4012e8d944722044aad5.slice - libcontainer container kubepods-burstable-podf877b2b2ec3e4012e8d944722044aad5.slice. Apr 16 23:29:52.149390 kubelet[2382]: E0416 23:29:52.149349 2382 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.154567 systemd[1]: Created slice kubepods-burstable-pod50c40e2f8151bc0e01acdbfdaa41b8a5.slice - libcontainer container kubepods-burstable-pod50c40e2f8151bc0e01acdbfdaa41b8a5.slice. 
Apr 16 23:29:52.158619 kubelet[2382]: E0416 23:29:52.158589 2382 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.161029 systemd[1]: Created slice kubepods-burstable-pod74e6ab4b3a047bb356acf8571be9be24.slice - libcontainer container kubepods-burstable-pod74e6ab4b3a047bb356acf8571be9be24.slice. Apr 16 23:29:52.163302 kubelet[2382]: E0416 23:29:52.163275 2382 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.164803 kubelet[2382]: I0416 23:29:52.164780 2382 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.165351 kubelet[2382]: E0416 23:29:52.165317 2382 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://46.224.1.2:6443/api/v1/nodes\": dial tcp 46.224.1.2:6443: connect: connection refused" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.184211 kubelet[2382]: I0416 23:29:52.184163 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c40e2f8151bc0e01acdbfdaa41b8a5-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-fff9fc0546\" (UID: \"50c40e2f8151bc0e01acdbfdaa41b8a5\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.186667 kubelet[2382]: I0416 23:29:52.184454 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74e6ab4b3a047bb356acf8571be9be24-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" (UID: \"74e6ab4b3a047bb356acf8571be9be24\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.186667 
kubelet[2382]: I0416 23:29:52.185730 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.186667 kubelet[2382]: I0416 23:29:52.185771 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.186667 kubelet[2382]: I0416 23:29:52.185794 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74e6ab4b3a047bb356acf8571be9be24-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" (UID: \"74e6ab4b3a047bb356acf8571be9be24\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.186667 kubelet[2382]: I0416 23:29:52.185817 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74e6ab4b3a047bb356acf8571be9be24-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" (UID: \"74e6ab4b3a047bb356acf8571be9be24\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.187104 kubelet[2382]: I0416 23:29:52.185837 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-ca-certs\") pod 
\"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.187104 kubelet[2382]: I0416 23:29:52.185870 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.187104 kubelet[2382]: I0416 23:29:52.185890 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.206008 kubelet[2382]: E0416 23:29:52.205921 2382 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.1.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-fff9fc0546?timeout=10s\": dial tcp 46.224.1.2:6443: connect: connection refused" interval="400ms" Apr 16 23:29:52.369131 kubelet[2382]: I0416 23:29:52.369079 2382 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.369745 kubelet[2382]: E0416 23:29:52.369591 2382 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://46.224.1.2:6443/api/v1/nodes\": dial tcp 46.224.1.2:6443: connect: connection refused" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.456636 containerd[1544]: time="2026-04-16T23:29:52.456587648Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-fff9fc0546,Uid:f877b2b2ec3e4012e8d944722044aad5,Namespace:kube-system,Attempt:0,}" Apr 16 23:29:52.462456 containerd[1544]: time="2026-04-16T23:29:52.462312136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-fff9fc0546,Uid:50c40e2f8151bc0e01acdbfdaa41b8a5,Namespace:kube-system,Attempt:0,}" Apr 16 23:29:52.467003 containerd[1544]: time="2026-04-16T23:29:52.466772686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-fff9fc0546,Uid:74e6ab4b3a047bb356acf8571be9be24,Namespace:kube-system,Attempt:0,}" Apr 16 23:29:52.607292 kubelet[2382]: E0416 23:29:52.607221 2382 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.1.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-fff9fc0546?timeout=10s\": dial tcp 46.224.1.2:6443: connect: connection refused" interval="800ms" Apr 16 23:29:52.773006 kubelet[2382]: I0416 23:29:52.772858 2382 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.773738 kubelet[2382]: E0416 23:29:52.773671 2382 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://46.224.1.2:6443/api/v1/nodes\": dial tcp 46.224.1.2:6443: connect: connection refused" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:52.908653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364737250.mount: Deactivated successfully. 
Apr 16 23:29:52.915715 containerd[1544]: time="2026-04-16T23:29:52.915596157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:29:52.919098 containerd[1544]: time="2026-04-16T23:29:52.919051446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 16 23:29:52.921732 containerd[1544]: time="2026-04-16T23:29:52.921517767Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:29:52.925149 containerd[1544]: time="2026-04-16T23:29:52.925115104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:29:52.925639 containerd[1544]: time="2026-04-16T23:29:52.925485692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:29:52.926652 containerd[1544]: time="2026-04-16T23:29:52.926548988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:29:52.927818 containerd[1544]: time="2026-04-16T23:29:52.927775303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:29:52.929163 containerd[1544]: time="2026-04-16T23:29:52.928815184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 
23:29:52.933476 containerd[1544]: time="2026-04-16T23:29:52.933418502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 474.556852ms" Apr 16 23:29:52.936284 containerd[1544]: time="2026-04-16T23:29:52.936234477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 467.972114ms" Apr 16 23:29:52.943915 containerd[1544]: time="2026-04-16T23:29:52.943863900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 478.948399ms" Apr 16 23:29:52.981550 containerd[1544]: time="2026-04-16T23:29:52.981500857Z" level=info msg="connecting to shim e57e77c590ae49f833fd286730ef6dbaf6968dc0c096912d2dd57094eda43b31" address="unix:///run/containerd/s/e76134e22e119c3bf7a3c2ae78710df639c1e6602f2260dac854e1a845ad65ea" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:29:52.982467 containerd[1544]: time="2026-04-16T23:29:52.982272853Z" level=info msg="connecting to shim abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a" address="unix:///run/containerd/s/b0e51fd40e37ba24564f11f5a3d247441c0dfecd0fb4ead8cb682af73ec76827" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:29:52.986416 containerd[1544]: time="2026-04-16T23:29:52.986368337Z" level=info msg="connecting to shim 
764f342d18ea5565ec09894e988ecdba5cdf206434578f50a0410858181c6f00" address="unix:///run/containerd/s/b944e3de76ce2d5ee8cc3198380826bf74d38a44e871a3791b05c2c702aa6cd0" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:29:53.013080 systemd[1]: Started cri-containerd-abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a.scope - libcontainer container abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a. Apr 16 23:29:53.024145 systemd[1]: Started cri-containerd-e57e77c590ae49f833fd286730ef6dbaf6968dc0c096912d2dd57094eda43b31.scope - libcontainer container e57e77c590ae49f833fd286730ef6dbaf6968dc0c096912d2dd57094eda43b31. Apr 16 23:29:53.031640 systemd[1]: Started cri-containerd-764f342d18ea5565ec09894e988ecdba5cdf206434578f50a0410858181c6f00.scope - libcontainer container 764f342d18ea5565ec09894e988ecdba5cdf206434578f50a0410858181c6f00. Apr 16 23:29:53.087356 containerd[1544]: time="2026-04-16T23:29:53.087221195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-fff9fc0546,Uid:f877b2b2ec3e4012e8d944722044aad5,Namespace:kube-system,Attempt:0,} returns sandbox id \"abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a\"" Apr 16 23:29:53.090945 containerd[1544]: time="2026-04-16T23:29:53.090552752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-fff9fc0546,Uid:74e6ab4b3a047bb356acf8571be9be24,Namespace:kube-system,Attempt:0,} returns sandbox id \"e57e77c590ae49f833fd286730ef6dbaf6968dc0c096912d2dd57094eda43b31\"" Apr 16 23:29:53.095732 containerd[1544]: time="2026-04-16T23:29:53.095658146Z" level=info msg="CreateContainer within sandbox \"abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:29:53.096948 containerd[1544]: time="2026-04-16T23:29:53.096841744Z" level=info msg="CreateContainer within sandbox 
\"e57e77c590ae49f833fd286730ef6dbaf6968dc0c096912d2dd57094eda43b31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 23:29:53.107680 containerd[1544]: time="2026-04-16T23:29:53.107643890Z" level=info msg="Container f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:29:53.108197 containerd[1544]: time="2026-04-16T23:29:53.108081406Z" level=info msg="Container 85010191924f3492d6486b47d30c48830e9f390aa012f98b80c4be1b533b8403: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:29:53.109948 containerd[1544]: time="2026-04-16T23:29:53.109914795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-fff9fc0546,Uid:50c40e2f8151bc0e01acdbfdaa41b8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"764f342d18ea5565ec09894e988ecdba5cdf206434578f50a0410858181c6f00\"" Apr 16 23:29:53.115639 containerd[1544]: time="2026-04-16T23:29:53.115550114Z" level=info msg="CreateContainer within sandbox \"764f342d18ea5565ec09894e988ecdba5cdf206434578f50a0410858181c6f00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:29:53.122776 containerd[1544]: time="2026-04-16T23:29:53.122635536Z" level=info msg="CreateContainer within sandbox \"e57e77c590ae49f833fd286730ef6dbaf6968dc0c096912d2dd57094eda43b31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"85010191924f3492d6486b47d30c48830e9f390aa012f98b80c4be1b533b8403\"" Apr 16 23:29:53.122984 containerd[1544]: time="2026-04-16T23:29:53.122726505Z" level=info msg="CreateContainer within sandbox \"abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985\"" Apr 16 23:29:53.124188 containerd[1544]: time="2026-04-16T23:29:53.123848150Z" level=info msg="StartContainer for 
\"f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985\"" Apr 16 23:29:53.124869 containerd[1544]: time="2026-04-16T23:29:53.124841646Z" level=info msg="StartContainer for \"85010191924f3492d6486b47d30c48830e9f390aa012f98b80c4be1b533b8403\"" Apr 16 23:29:53.125347 containerd[1544]: time="2026-04-16T23:29:53.125066287Z" level=info msg="connecting to shim f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985" address="unix:///run/containerd/s/b0e51fd40e37ba24564f11f5a3d247441c0dfecd0fb4ead8cb682af73ec76827" protocol=ttrpc version=3 Apr 16 23:29:53.127275 containerd[1544]: time="2026-04-16T23:29:53.127240019Z" level=info msg="connecting to shim 85010191924f3492d6486b47d30c48830e9f390aa012f98b80c4be1b533b8403" address="unix:///run/containerd/s/e76134e22e119c3bf7a3c2ae78710df639c1e6602f2260dac854e1a845ad65ea" protocol=ttrpc version=3 Apr 16 23:29:53.130600 containerd[1544]: time="2026-04-16T23:29:53.130552446Z" level=info msg="Container 7a55dc625473eb3d0eea23b55e8d576ccda7234d00eb550139336e55186db83e: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:29:53.141191 containerd[1544]: time="2026-04-16T23:29:53.141140797Z" level=info msg="CreateContainer within sandbox \"764f342d18ea5565ec09894e988ecdba5cdf206434578f50a0410858181c6f00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a55dc625473eb3d0eea23b55e8d576ccda7234d00eb550139336e55186db83e\"" Apr 16 23:29:53.141913 containerd[1544]: time="2026-04-16T23:29:53.141886239Z" level=info msg="StartContainer for \"7a55dc625473eb3d0eea23b55e8d576ccda7234d00eb550139336e55186db83e\"" Apr 16 23:29:53.145767 containerd[1544]: time="2026-04-16T23:29:53.145567384Z" level=info msg="connecting to shim 7a55dc625473eb3d0eea23b55e8d576ccda7234d00eb550139336e55186db83e" address="unix:///run/containerd/s/b944e3de76ce2d5ee8cc3198380826bf74d38a44e871a3791b05c2c702aa6cd0" protocol=ttrpc version=3 Apr 16 23:29:53.156024 systemd[1]: Started 
cri-containerd-f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985.scope - libcontainer container f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985. Apr 16 23:29:53.162985 systemd[1]: Started cri-containerd-85010191924f3492d6486b47d30c48830e9f390aa012f98b80c4be1b533b8403.scope - libcontainer container 85010191924f3492d6486b47d30c48830e9f390aa012f98b80c4be1b533b8403. Apr 16 23:29:53.172870 systemd[1]: Started cri-containerd-7a55dc625473eb3d0eea23b55e8d576ccda7234d00eb550139336e55186db83e.scope - libcontainer container 7a55dc625473eb3d0eea23b55e8d576ccda7234d00eb550139336e55186db83e. Apr 16 23:29:53.239217 containerd[1544]: time="2026-04-16T23:29:53.238788863Z" level=info msg="StartContainer for \"85010191924f3492d6486b47d30c48830e9f390aa012f98b80c4be1b533b8403\" returns successfully" Apr 16 23:29:53.240888 containerd[1544]: time="2026-04-16T23:29:53.240056506Z" level=info msg="StartContainer for \"f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985\" returns successfully" Apr 16 23:29:53.263126 containerd[1544]: time="2026-04-16T23:29:53.262959299Z" level=info msg="StartContainer for \"7a55dc625473eb3d0eea23b55e8d576ccda7234d00eb550139336e55186db83e\" returns successfully" Apr 16 23:29:53.576989 kubelet[2382]: I0416 23:29:53.576955 2382 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:54.027924 kubelet[2382]: E0416 23:29:54.026434 2382 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:54.032379 kubelet[2382]: E0416 23:29:54.032075 2382 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:54.035519 kubelet[2382]: E0416 23:29:54.035352 2382 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:54.888558 kubelet[2382]: E0416 23:29:54.888517 2382 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:55.020681 kubelet[2382]: I0416 23:29:55.020632 2382 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:55.020681 kubelet[2382]: E0416 23:29:55.020684 2382 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-fff9fc0546\": node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.035953 kubelet[2382]: E0416 23:29:55.035830 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.041109 kubelet[2382]: E0416 23:29:55.041073 2382 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:55.047480 kubelet[2382]: E0416 23:29:55.047275 2382 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" node="ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:55.137857 kubelet[2382]: E0416 23:29:55.137769 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.238965 kubelet[2382]: E0416 23:29:55.238907 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.339737 kubelet[2382]: E0416 23:29:55.339480 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 
23:29:55.440313 kubelet[2382]: E0416 23:29:55.440174 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.541487 kubelet[2382]: E0416 23:29:55.541327 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.642534 kubelet[2382]: E0416 23:29:55.642440 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.743008 kubelet[2382]: E0416 23:29:55.742886 2382 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found" Apr 16 23:29:55.790735 kubelet[2382]: I0416 23:29:55.790135 2382 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:55.804334 kubelet[2382]: I0416 23:29:55.804119 2382 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:55.812187 kubelet[2382]: I0416 23:29:55.812152 2382 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546" Apr 16 23:29:55.963712 kubelet[2382]: I0416 23:29:55.963658 2382 apiserver.go:52] "Watching apiserver" Apr 16 23:29:55.983920 kubelet[2382]: I0416 23:29:55.983872 2382 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 23:29:57.456897 systemd[1]: Reload requested from client PID 2670 ('systemctl') (unit session-7.scope)... Apr 16 23:29:57.456916 systemd[1]: Reloading... Apr 16 23:29:57.553784 zram_generator::config[2714]: No configuration found. Apr 16 23:29:57.785594 systemd[1]: Reloading finished in 328 ms. Apr 16 23:29:57.813089 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 23:29:57.835146 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 23:29:57.835460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:29:57.835561 systemd[1]: kubelet.service: Consumed 886ms CPU time, 121.6M memory peak. Apr 16 23:29:57.837960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:29:58.007553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:29:58.020182 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:29:58.072026 kubelet[2759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 23:29:58.080767 kubelet[2759]: I0416 23:29:58.078968 2759 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 16 23:29:58.080767 kubelet[2759]: I0416 23:29:58.079012 2759 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:29:58.080767 kubelet[2759]: I0416 23:29:58.079036 2759 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 23:29:58.080767 kubelet[2759]: I0416 23:29:58.079041 2759 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 23:29:58.082341 kubelet[2759]: I0416 23:29:58.082083 2759 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 16 23:29:58.084293 kubelet[2759]: I0416 23:29:58.084260 2759 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 23:29:58.086786 kubelet[2759]: I0416 23:29:58.086653 2759 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:29:58.094748 kubelet[2759]: I0416 23:29:58.094686 2759 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:29:58.100020 kubelet[2759]: I0416 23:29:58.099984 2759 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 23:29:58.100480 kubelet[2759]: I0416 23:29:58.100407 2759 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:29:58.100731 kubelet[2759]: I0416 23:29:58.100545 2759 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-fff9fc0546","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:29:58.100871 kubelet[2759]: I0416 23:29:58.100855 2759 topology_manager.go:143] "Creating topology manager with none policy"
Apr 16 23:29:58.100926 kubelet[2759]: I0416 23:29:58.100917 2759 container_manager_linux.go:308] "Creating device plugin manager"
Apr 16 23:29:58.100995 kubelet[2759]: I0416 23:29:58.100986 2759 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 23:29:58.101321 kubelet[2759]: I0416 23:29:58.101297 2759 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 16 23:29:58.101603 kubelet[2759]: I0416 23:29:58.101583 2759 kubelet.go:482] "Attempting to sync node with API server"
Apr 16 23:29:58.101766 kubelet[2759]: I0416 23:29:58.101748 2759 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:29:58.101870 kubelet[2759]: I0416 23:29:58.101856 2759 kubelet.go:394] "Adding apiserver pod source"
Apr 16 23:29:58.102022 kubelet[2759]: I0416 23:29:58.101932 2759 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:29:58.106674 kubelet[2759]: I0416 23:29:58.106492 2759 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:29:58.109217 kubelet[2759]: I0416 23:29:58.109180 2759 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:29:58.109217 kubelet[2759]: I0416 23:29:58.109224 2759 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 23:29:58.114711 kubelet[2759]: I0416 23:29:58.114159 2759 server.go:1257] "Started kubelet"
Apr 16 23:29:58.119502 kubelet[2759]: I0416 23:29:58.119475 2759 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 16 23:29:58.123048 kubelet[2759]: I0416 23:29:58.122967 2759 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:29:58.125943 kubelet[2759]: I0416 23:29:58.125427 2759 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 23:29:58.131584 kubelet[2759]: I0416 23:29:58.131146 2759 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:29:58.131584 kubelet[2759]: I0416 23:29:58.131226 2759 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 23:29:58.131584 kubelet[2759]: I0416 23:29:58.131392 2759 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:29:58.139335 kubelet[2759]: I0416 23:29:58.139115 2759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:29:58.144089 kubelet[2759]: I0416 23:29:58.144044 2759 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 16 23:29:58.144345 kubelet[2759]: E0416 23:29:58.144327 2759 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fff9fc0546\" not found"
Apr 16 23:29:58.147597 kubelet[2759]: I0416 23:29:58.147548 2759 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:29:58.147748 kubelet[2759]: I0416 23:29:58.147663 2759 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:29:58.155425 kubelet[2759]: I0416 23:29:58.154986 2759 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:29:58.159522 kubelet[2759]: I0416 23:29:58.158957 2759 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:29:58.159760 kubelet[2759]: I0416 23:29:58.159731 2759 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 16 23:29:58.159850 kubelet[2759]: I0416 23:29:58.159839 2759 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 16 23:29:58.159954 kubelet[2759]: E0416 23:29:58.159936 2759 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:29:58.161781 kubelet[2759]: I0416 23:29:58.160824 2759 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 23:29:58.161781 kubelet[2759]: I0416 23:29:58.160939 2759 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 23:29:58.170731 kubelet[2759]: I0416 23:29:58.159443 2759 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:29:58.187239 kubelet[2759]: E0416 23:29:58.187199 2759 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.225947 2759 cpu_manager.go:225] "Starting" policy="none"
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.225970 2759 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.225997 2759 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.226149 2759 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.226172 2759 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.226194 2759 policy_none.go:50] "Start"
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.226202 2759 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 23:29:58.227071 kubelet[2759]: I0416 23:29:58.226213 2759 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 23:29:58.227984 kubelet[2759]: I0416 23:29:58.227794 2759 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 16 23:29:58.228146 kubelet[2759]: I0416 23:29:58.228117 2759 policy_none.go:44] "Start"
Apr 16 23:29:58.235279 kubelet[2759]: E0416 23:29:58.235252 2759 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:29:58.237505 kubelet[2759]: I0416 23:29:58.236858 2759 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 16 23:29:58.237505 kubelet[2759]: I0416 23:29:58.236879 2759 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:29:58.237505 kubelet[2759]: I0416 23:29:58.237349 2759 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 16 23:29:58.241691 kubelet[2759]: E0416 23:29:58.241031 2759 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:29:58.262126 kubelet[2759]: I0416 23:29:58.262082 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.262631 kubelet[2759]: I0416 23:29:58.262603 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.262865 kubelet[2759]: I0416 23:29:58.262839 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.273354 kubelet[2759]: E0416 23:29:58.273309 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.274908 kubelet[2759]: E0416 23:29:58.274837 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.275082 kubelet[2759]: E0416 23:29:58.274874 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-fff9fc0546\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.345500 kubelet[2759]: I0416 23:29:58.345380 2759 kubelet_node_status.go:74] "Attempting to register node" node="ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.358446 kubelet[2759]: I0416 23:29:58.357989 2759 kubelet_node_status.go:123] "Node was previously registered" node="ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.358446 kubelet[2759]: I0416 23:29:58.358081 2759 kubelet_node_status.go:77] "Successfully registered node" node="ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.361509 kubelet[2759]: I0416 23:29:58.361474 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74e6ab4b3a047bb356acf8571be9be24-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" (UID: \"74e6ab4b3a047bb356acf8571be9be24\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.362436 kubelet[2759]: I0416 23:29:58.362414 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.362577 kubelet[2759]: I0416 23:29:58.362560 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.362656 kubelet[2759]: I0416 23:29:58.362644 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.362776 kubelet[2759]: I0416 23:29:58.362760 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.362865 kubelet[2759]: I0416 23:29:58.362843 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50c40e2f8151bc0e01acdbfdaa41b8a5-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-fff9fc0546\" (UID: \"50c40e2f8151bc0e01acdbfdaa41b8a5\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.362987 kubelet[2759]: I0416 23:29:58.362967 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74e6ab4b3a047bb356acf8571be9be24-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" (UID: \"74e6ab4b3a047bb356acf8571be9be24\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.363071 kubelet[2759]: I0416 23:29:58.363059 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74e6ab4b3a047bb356acf8571be9be24-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" (UID: \"74e6ab4b3a047bb356acf8571be9be24\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:58.363141 kubelet[2759]: I0416 23:29:58.363129 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f877b2b2ec3e4012e8d944722044aad5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-fff9fc0546\" (UID: \"f877b2b2ec3e4012e8d944722044aad5\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:59.104386 kubelet[2759]: I0416 23:29:59.104342 2759 apiserver.go:52] "Watching apiserver"
Apr 16 23:29:59.161102 kubelet[2759]: I0416 23:29:59.161042 2759 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 16 23:29:59.203685 kubelet[2759]: I0416 23:29:59.203646 2759 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:59.215620 kubelet[2759]: E0416 23:29:59.215099 2759 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-fff9fc0546\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546"
Apr 16 23:29:59.233824 kubelet[2759]: I0416 23:29:59.233569 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fff9fc0546" podStartSLOduration=4.233552648 podStartE2EDuration="4.233552648s" podCreationTimestamp="2026-04-16 23:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:29:59.232545445 +0000 UTC m=+1.206376553" watchObservedRunningTime="2026-04-16 23:29:59.233552648 +0000 UTC m=+1.207383756"
Apr 16 23:29:59.258937 kubelet[2759]: I0416 23:29:59.258269 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fff9fc0546" podStartSLOduration=4.258252069 podStartE2EDuration="4.258252069s" podCreationTimestamp="2026-04-16 23:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:29:59.256944872 +0000 UTC m=+1.230775980" watchObservedRunningTime="2026-04-16 23:29:59.258252069 +0000 UTC m=+1.232083177"
Apr 16 23:29:59.259153 kubelet[2759]: I0416 23:29:59.259001 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fff9fc0546" podStartSLOduration=4.258993088 podStartE2EDuration="4.258993088s" podCreationTimestamp="2026-04-16 23:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:29:59.247959177 +0000 UTC m=+1.221790285" watchObservedRunningTime="2026-04-16 23:29:59.258993088 +0000 UTC m=+1.232824196"
Apr 16 23:30:02.234518 kubelet[2759]: I0416 23:30:02.234291 2759 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 23:30:02.236341 kubelet[2759]: I0416 23:30:02.236254 2759 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 23:30:02.236382 containerd[1544]: time="2026-04-16T23:30:02.236005237Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 23:30:02.973270 systemd[1]: Created slice kubepods-besteffort-podf4842aa8_f7f2_4606_8e72_4b968d8c9a65.slice - libcontainer container kubepods-besteffort-podf4842aa8_f7f2_4606_8e72_4b968d8c9a65.slice.
Apr 16 23:30:02.994125 kubelet[2759]: I0416 23:30:02.993930 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpmh\" (UniqueName: \"kubernetes.io/projected/f4842aa8-f7f2-4606-8e72-4b968d8c9a65-kube-api-access-6vpmh\") pod \"kube-proxy-2fg2l\" (UID: \"f4842aa8-f7f2-4606-8e72-4b968d8c9a65\") " pod="kube-system/kube-proxy-2fg2l"
Apr 16 23:30:02.994125 kubelet[2759]: I0416 23:30:02.993975 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4842aa8-f7f2-4606-8e72-4b968d8c9a65-xtables-lock\") pod \"kube-proxy-2fg2l\" (UID: \"f4842aa8-f7f2-4606-8e72-4b968d8c9a65\") " pod="kube-system/kube-proxy-2fg2l"
Apr 16 23:30:02.994125 kubelet[2759]: I0416 23:30:02.993995 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4842aa8-f7f2-4606-8e72-4b968d8c9a65-lib-modules\") pod \"kube-proxy-2fg2l\" (UID: \"f4842aa8-f7f2-4606-8e72-4b968d8c9a65\") " pod="kube-system/kube-proxy-2fg2l"
Apr 16 23:30:02.994125 kubelet[2759]: I0416 23:30:02.994013 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4842aa8-f7f2-4606-8e72-4b968d8c9a65-kube-proxy\") pod \"kube-proxy-2fg2l\" (UID: \"f4842aa8-f7f2-4606-8e72-4b968d8c9a65\") " pod="kube-system/kube-proxy-2fg2l"
Apr 16 23:30:03.116809 kubelet[2759]: E0416 23:30:03.116719 2759 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 16 23:30:03.116809 kubelet[2759]: E0416 23:30:03.116773 2759 projected.go:196] Error preparing data for projected volume kube-api-access-6vpmh for pod kube-system/kube-proxy-2fg2l: configmap "kube-root-ca.crt" not found
Apr 16 23:30:03.117325 kubelet[2759]: E0416 23:30:03.117298 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4842aa8-f7f2-4606-8e72-4b968d8c9a65-kube-api-access-6vpmh podName:f4842aa8-f7f2-4606-8e72-4b968d8c9a65 nodeName:}" failed. No retries permitted until 2026-04-16 23:30:03.617198837 +0000 UTC m=+5.591029985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6vpmh" (UniqueName: "kubernetes.io/projected/f4842aa8-f7f2-4606-8e72-4b968d8c9a65-kube-api-access-6vpmh") pod "kube-proxy-2fg2l" (UID: "f4842aa8-f7f2-4606-8e72-4b968d8c9a65") : configmap "kube-root-ca.crt" not found
Apr 16 23:30:03.564537 systemd[1]: Created slice kubepods-besteffort-pod43f422ec_f79a_4341_bd03_8eef1c791565.slice - libcontainer container kubepods-besteffort-pod43f422ec_f79a_4341_bd03_8eef1c791565.slice.
Apr 16 23:30:03.597892 kubelet[2759]: I0416 23:30:03.597684 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gk7\" (UniqueName: \"kubernetes.io/projected/43f422ec-f79a-4341-bd03-8eef1c791565-kube-api-access-96gk7\") pod \"tigera-operator-6cf4cccc57-8nq9n\" (UID: \"43f422ec-f79a-4341-bd03-8eef1c791565\") " pod="tigera-operator/tigera-operator-6cf4cccc57-8nq9n"
Apr 16 23:30:03.597892 kubelet[2759]: I0416 23:30:03.597804 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43f422ec-f79a-4341-bd03-8eef1c791565-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-8nq9n\" (UID: \"43f422ec-f79a-4341-bd03-8eef1c791565\") " pod="tigera-operator/tigera-operator-6cf4cccc57-8nq9n"
Apr 16 23:30:03.874109 containerd[1544]: time="2026-04-16T23:30:03.873987653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-8nq9n,Uid:43f422ec-f79a-4341-bd03-8eef1c791565,Namespace:tigera-operator,Attempt:0,}"
Apr 16 23:30:03.883319 containerd[1544]: time="2026-04-16T23:30:03.883237128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2fg2l,Uid:f4842aa8-f7f2-4606-8e72-4b968d8c9a65,Namespace:kube-system,Attempt:0,}"
Apr 16 23:30:03.910015 containerd[1544]: time="2026-04-16T23:30:03.909928055Z" level=info msg="connecting to shim 02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1" address="unix:///run/containerd/s/b96aa5b3ac4dad1e24e9c53e6fb25c22e0378420b82e643a5ae9128615ebd37b" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:30:03.923907 containerd[1544]: time="2026-04-16T23:30:03.923581710Z" level=info msg="connecting to shim 575817903d5e3938af737161ed3b1484f45f1c69c8fc13b771214789819fb212" address="unix:///run/containerd/s/873c56c2c04e951f4f173481109e48ae837523a9b46f529ee00fbcd487db55ae" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:30:03.947928 systemd[1]: Started cri-containerd-02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1.scope - libcontainer container 02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1.
Apr 16 23:30:03.962923 systemd[1]: Started cri-containerd-575817903d5e3938af737161ed3b1484f45f1c69c8fc13b771214789819fb212.scope - libcontainer container 575817903d5e3938af737161ed3b1484f45f1c69c8fc13b771214789819fb212.
Apr 16 23:30:04.003149 containerd[1544]: time="2026-04-16T23:30:04.003097665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-8nq9n,Uid:43f422ec-f79a-4341-bd03-8eef1c791565,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1\""
Apr 16 23:30:04.006658 containerd[1544]: time="2026-04-16T23:30:04.005930955Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 23:30:04.014092 containerd[1544]: time="2026-04-16T23:30:04.014038043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2fg2l,Uid:f4842aa8-f7f2-4606-8e72-4b968d8c9a65,Namespace:kube-system,Attempt:0,} returns sandbox id \"575817903d5e3938af737161ed3b1484f45f1c69c8fc13b771214789819fb212\""
Apr 16 23:30:04.021932 containerd[1544]: time="2026-04-16T23:30:04.021893771Z" level=info msg="CreateContainer within sandbox \"575817903d5e3938af737161ed3b1484f45f1c69c8fc13b771214789819fb212\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 23:30:04.030789 containerd[1544]: time="2026-04-16T23:30:04.030746778Z" level=info msg="Container 87f269f808c4bea8fa9c4c445986a44227c1f40c4709d897fc8c96f183d5b44f: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:30:04.038906 containerd[1544]: time="2026-04-16T23:30:04.038849546Z" level=info msg="CreateContainer within sandbox \"575817903d5e3938af737161ed3b1484f45f1c69c8fc13b771214789819fb212\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87f269f808c4bea8fa9c4c445986a44227c1f40c4709d897fc8c96f183d5b44f\""
Apr 16 23:30:04.039610 containerd[1544]: time="2026-04-16T23:30:04.039410115Z" level=info msg="StartContainer for \"87f269f808c4bea8fa9c4c445986a44227c1f40c4709d897fc8c96f183d5b44f\""
Apr 16 23:30:04.041219 containerd[1544]: time="2026-04-16T23:30:04.041185357Z" level=info msg="connecting to shim 87f269f808c4bea8fa9c4c445986a44227c1f40c4709d897fc8c96f183d5b44f" address="unix:///run/containerd/s/873c56c2c04e951f4f173481109e48ae837523a9b46f529ee00fbcd487db55ae" protocol=ttrpc version=3
Apr 16 23:30:04.060154 systemd[1]: Started cri-containerd-87f269f808c4bea8fa9c4c445986a44227c1f40c4709d897fc8c96f183d5b44f.scope - libcontainer container 87f269f808c4bea8fa9c4c445986a44227c1f40c4709d897fc8c96f183d5b44f.
Apr 16 23:30:04.122583 containerd[1544]: time="2026-04-16T23:30:04.122447590Z" level=info msg="StartContainer for \"87f269f808c4bea8fa9c4c445986a44227c1f40c4709d897fc8c96f183d5b44f\" returns successfully"
Apr 16 23:30:05.414169 kubelet[2759]: I0416 23:30:05.414085 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-2fg2l" podStartSLOduration=3.414069269 podStartE2EDuration="3.414069269s" podCreationTimestamp="2026-04-16 23:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:30:04.241404532 +0000 UTC m=+6.215235640" watchObservedRunningTime="2026-04-16 23:30:05.414069269 +0000 UTC m=+7.387900377"
Apr 16 23:30:05.824852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870696319.mount: Deactivated successfully.
Apr 16 23:30:06.253309 containerd[1544]: time="2026-04-16T23:30:06.253220537Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:06.255017 containerd[1544]: time="2026-04-16T23:30:06.254951623Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 16 23:30:06.257066 containerd[1544]: time="2026-04-16T23:30:06.256939105Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:06.262428 containerd[1544]: time="2026-04-16T23:30:06.262356636Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:30:06.263759 containerd[1544]: time="2026-04-16T23:30:06.263686865Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.257716544s"
Apr 16 23:30:06.263759 containerd[1544]: time="2026-04-16T23:30:06.263737152Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 16 23:30:06.268187 containerd[1544]: time="2026-04-16T23:30:06.268156140Z" level=info msg="CreateContainer within sandbox \"02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 23:30:06.281410 containerd[1544]: time="2026-04-16T23:30:06.279171987Z" level=info msg="Container cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:30:06.288151 containerd[1544]: time="2026-04-16T23:30:06.288108538Z" level=info msg="CreateContainer within sandbox \"02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3\""
Apr 16 23:30:06.290128 containerd[1544]: time="2026-04-16T23:30:06.288688940Z" level=info msg="StartContainer for \"cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3\""
Apr 16 23:30:06.291189 containerd[1544]: time="2026-04-16T23:30:06.291156851Z" level=info msg="connecting to shim cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3" address="unix:///run/containerd/s/b96aa5b3ac4dad1e24e9c53e6fb25c22e0378420b82e643a5ae9128615ebd37b" protocol=ttrpc version=3
Apr 16 23:30:06.322000 systemd[1]: Started cri-containerd-cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3.scope - libcontainer container cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3.
Apr 16 23:30:06.355737 containerd[1544]: time="2026-04-16T23:30:06.355648622Z" level=info msg="StartContainer for \"cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3\" returns successfully"
Apr 16 23:30:12.372144 kubelet[2759]: I0416 23:30:12.372072 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-8nq9n" podStartSLOduration=7.112476004 podStartE2EDuration="9.37205915s" podCreationTimestamp="2026-04-16 23:30:03 +0000 UTC" firstStartedPulling="2026-04-16 23:30:04.004989205 +0000 UTC m=+5.978820273" lastFinishedPulling="2026-04-16 23:30:06.264572311 +0000 UTC m=+8.238403419" observedRunningTime="2026-04-16 23:30:07.245169737 +0000 UTC m=+9.219000845" watchObservedRunningTime="2026-04-16 23:30:12.37205915 +0000 UTC m=+14.345890258"
Apr 16 23:30:12.634918 sudo[1808]: pam_unix(sudo:session): session closed for user root
Apr 16 23:30:12.650144 sshd[1807]: Connection closed by 50.85.169.122 port 44440
Apr 16 23:30:12.650763 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Apr 16 23:30:12.656922 systemd[1]: sshd@6-46.224.1.2:22-50.85.169.122:44440.service: Deactivated successfully.
Apr 16 23:30:12.662616 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 23:30:12.664108 systemd[1]: session-7.scope: Consumed 4.263s CPU time, 218.8M memory peak.
Apr 16 23:30:12.668308 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Apr 16 23:30:12.670446 systemd-logind[1520]: Removed session 7.
Apr 16 23:30:18.114628 systemd[1]: Created slice kubepods-besteffort-pod0256e09d_d4cd_4cd9_b133_14f0b239830f.slice - libcontainer container kubepods-besteffort-pod0256e09d_d4cd_4cd9_b133_14f0b239830f.slice.
Apr 16 23:30:18.203498 kubelet[2759]: I0416 23:30:18.203351 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0256e09d-d4cd-4cd9-b133-14f0b239830f-typha-certs\") pod \"calico-typha-54c7884887-bz44m\" (UID: \"0256e09d-d4cd-4cd9-b133-14f0b239830f\") " pod="calico-system/calico-typha-54c7884887-bz44m"
Apr 16 23:30:18.203498 kubelet[2759]: I0416 23:30:18.203395 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5w2d\" (UniqueName: \"kubernetes.io/projected/0256e09d-d4cd-4cd9-b133-14f0b239830f-kube-api-access-j5w2d\") pod \"calico-typha-54c7884887-bz44m\" (UID: \"0256e09d-d4cd-4cd9-b133-14f0b239830f\") " pod="calico-system/calico-typha-54c7884887-bz44m"
Apr 16 23:30:18.203498 kubelet[2759]: I0416 23:30:18.203419 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0256e09d-d4cd-4cd9-b133-14f0b239830f-tigera-ca-bundle\") pod \"calico-typha-54c7884887-bz44m\" (UID: \"0256e09d-d4cd-4cd9-b133-14f0b239830f\") " pod="calico-system/calico-typha-54c7884887-bz44m"
Apr 16 23:30:18.207283 systemd[1]: Created slice kubepods-besteffort-pode0914f08_5b6e_4d9e_8c70_da98100bfac6.slice - libcontainer container kubepods-besteffort-pode0914f08_5b6e_4d9e_8c70_da98100bfac6.slice.
Apr 16 23:30:18.304326 kubelet[2759]: I0416 23:30:18.303838 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-sys-fs\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304326 kubelet[2759]: I0416 23:30:18.303894 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-xtables-lock\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304326 kubelet[2759]: I0416 23:30:18.303911 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-policysync\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304326 kubelet[2759]: I0416 23:30:18.303956 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0914f08-5b6e-4d9e-8c70-da98100bfac6-tigera-ca-bundle\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304326 kubelet[2759]: I0416 23:30:18.303978 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e0914f08-5b6e-4d9e-8c70-da98100bfac6-node-certs\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304541 kubelet[2759]: I0416 23:30:18.303993 2759 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-nodeproc\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304541 kubelet[2759]: I0416 23:30:18.304011 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-cni-log-dir\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304541 kubelet[2759]: I0416 23:30:18.304055 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-lib-modules\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304541 kubelet[2759]: I0416 23:30:18.304069 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-cni-bin-dir\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304541 kubelet[2759]: I0416 23:30:18.304085 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj29l\" (UniqueName: \"kubernetes.io/projected/e0914f08-5b6e-4d9e-8c70-da98100bfac6-kube-api-access-lj29l\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304645 kubelet[2759]: I0416 23:30:18.304103 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" 
(UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-bpffs\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304645 kubelet[2759]: I0416 23:30:18.304118 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-var-run-calico\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304645 kubelet[2759]: I0416 23:30:18.304135 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-cni-net-dir\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304645 kubelet[2759]: I0416 23:30:18.304150 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-var-lib-calico\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.304645 kubelet[2759]: I0416 23:30:18.304213 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e0914f08-5b6e-4d9e-8c70-da98100bfac6-flexvol-driver-host\") pod \"calico-node-xd69l\" (UID: \"e0914f08-5b6e-4d9e-8c70-da98100bfac6\") " pod="calico-system/calico-node-xd69l" Apr 16 23:30:18.334523 kubelet[2759]: E0416 23:30:18.334381 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:18.404667 kubelet[2759]: I0416 23:30:18.404535 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgln9\" (UniqueName: \"kubernetes.io/projected/64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8-kube-api-access-bgln9\") pod \"csi-node-driver-trmr9\" (UID: \"64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8\") " pod="calico-system/csi-node-driver-trmr9" Apr 16 23:30:18.405651 kubelet[2759]: I0416 23:30:18.404960 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8-socket-dir\") pod \"csi-node-driver-trmr9\" (UID: \"64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8\") " pod="calico-system/csi-node-driver-trmr9" Apr 16 23:30:18.405651 kubelet[2759]: I0416 23:30:18.405007 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8-registration-dir\") pod \"csi-node-driver-trmr9\" (UID: \"64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8\") " pod="calico-system/csi-node-driver-trmr9" Apr 16 23:30:18.405651 kubelet[2759]: I0416 23:30:18.405053 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8-kubelet-dir\") pod \"csi-node-driver-trmr9\" (UID: \"64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8\") " pod="calico-system/csi-node-driver-trmr9" Apr 16 23:30:18.405651 kubelet[2759]: I0416 23:30:18.405110 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8-varrun\") pod 
\"csi-node-driver-trmr9\" (UID: \"64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8\") " pod="calico-system/csi-node-driver-trmr9" Apr 16 23:30:18.415499 kubelet[2759]: E0416 23:30:18.415460 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.415657 kubelet[2759]: W0416 23:30:18.415489 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.415702 kubelet[2759]: E0416 23:30:18.415668 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.422731 containerd[1544]: time="2026-04-16T23:30:18.422587297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54c7884887-bz44m,Uid:0256e09d-d4cd-4cd9-b133-14f0b239830f,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:18.431055 kubelet[2759]: E0416 23:30:18.430949 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.431055 kubelet[2759]: W0416 23:30:18.430989 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.431055 kubelet[2759]: E0416 23:30:18.431014 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.456304 containerd[1544]: time="2026-04-16T23:30:18.456163557Z" level=info msg="connecting to shim 8bb754530b295034ba2b7065816d31a1c0ff9ffae0c56d78b21f4e02733fad40" address="unix:///run/containerd/s/d106770a4e63929205abe26696e6fbf5cf68afef398a3e3ae995b3d8653b7eb4" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:18.488130 systemd[1]: Started cri-containerd-8bb754530b295034ba2b7065816d31a1c0ff9ffae0c56d78b21f4e02733fad40.scope - libcontainer container 8bb754530b295034ba2b7065816d31a1c0ff9ffae0c56d78b21f4e02733fad40. Apr 16 23:30:18.505636 kubelet[2759]: E0416 23:30:18.505610 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.505904 kubelet[2759]: W0416 23:30:18.505718 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.505904 kubelet[2759]: E0416 23:30:18.505741 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.506636 kubelet[2759]: E0416 23:30:18.506592 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.506636 kubelet[2759]: W0416 23:30:18.506612 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.506636 kubelet[2759]: E0416 23:30:18.506633 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.506925 kubelet[2759]: E0416 23:30:18.506904 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.506925 kubelet[2759]: W0416 23:30:18.506919 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.507124 kubelet[2759]: E0416 23:30:18.506930 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.507218 kubelet[2759]: E0416 23:30:18.507204 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.507279 kubelet[2759]: W0416 23:30:18.507268 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.507727 kubelet[2759]: E0416 23:30:18.507320 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.507900 kubelet[2759]: E0416 23:30:18.507847 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.507970 kubelet[2759]: W0416 23:30:18.507957 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.508763 kubelet[2759]: E0416 23:30:18.508742 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.509075 kubelet[2759]: E0416 23:30:18.509062 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.509247 kubelet[2759]: W0416 23:30:18.509138 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.509247 kubelet[2759]: E0416 23:30:18.509154 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.509405 kubelet[2759]: E0416 23:30:18.509393 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.509550 kubelet[2759]: W0416 23:30:18.509454 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.509550 kubelet[2759]: E0416 23:30:18.509468 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.510594 kubelet[2759]: E0416 23:30:18.509677 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.510722 kubelet[2759]: W0416 23:30:18.510676 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.510792 kubelet[2759]: E0416 23:30:18.510777 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.511044 kubelet[2759]: E0416 23:30:18.511032 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.511212 kubelet[2759]: W0416 23:30:18.511106 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.511212 kubelet[2759]: E0416 23:30:18.511123 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.511370 kubelet[2759]: E0416 23:30:18.511360 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.511438 kubelet[2759]: W0416 23:30:18.511427 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.511572 kubelet[2759]: E0416 23:30:18.511485 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.511734 kubelet[2759]: E0416 23:30:18.511722 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.511792 kubelet[2759]: W0416 23:30:18.511781 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.511933 kubelet[2759]: E0416 23:30:18.511842 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.512066 kubelet[2759]: E0416 23:30:18.512055 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.512130 kubelet[2759]: W0416 23:30:18.512118 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.512189 kubelet[2759]: E0416 23:30:18.512178 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.512432 kubelet[2759]: E0416 23:30:18.512420 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.512577 kubelet[2759]: W0416 23:30:18.512493 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.512577 kubelet[2759]: E0416 23:30:18.512510 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.512756 kubelet[2759]: E0416 23:30:18.512741 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.513646 kubelet[2759]: W0416 23:30:18.512792 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.513646 kubelet[2759]: E0416 23:30:18.512805 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.513646 kubelet[2759]: E0416 23:30:18.512980 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.513646 kubelet[2759]: W0416 23:30:18.512989 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.513646 kubelet[2759]: E0416 23:30:18.512999 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.513646 kubelet[2759]: E0416 23:30:18.513151 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.513646 kubelet[2759]: W0416 23:30:18.513159 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.513646 kubelet[2759]: E0416 23:30:18.513166 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.515124 containerd[1544]: time="2026-04-16T23:30:18.515092124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xd69l,Uid:e0914f08-5b6e-4d9e-8c70-da98100bfac6,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:18.515424 kubelet[2759]: E0416 23:30:18.515258 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.515424 kubelet[2759]: W0416 23:30:18.515276 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.515424 kubelet[2759]: E0416 23:30:18.515293 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.515886 kubelet[2759]: E0416 23:30:18.515828 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.515978 kubelet[2759]: W0416 23:30:18.515963 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.516036 kubelet[2759]: E0416 23:30:18.516025 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.516748 kubelet[2759]: E0416 23:30:18.516442 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.516748 kubelet[2759]: W0416 23:30:18.516456 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.516748 kubelet[2759]: E0416 23:30:18.516467 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.517413 kubelet[2759]: E0416 23:30:18.517223 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.517413 kubelet[2759]: W0416 23:30:18.517350 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.517413 kubelet[2759]: E0416 23:30:18.517368 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.518062 kubelet[2759]: E0416 23:30:18.517920 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.518062 kubelet[2759]: W0416 23:30:18.518061 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.518149 kubelet[2759]: E0416 23:30:18.518075 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.519850 kubelet[2759]: E0416 23:30:18.519827 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.519850 kubelet[2759]: W0416 23:30:18.519851 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.519850 kubelet[2759]: E0416 23:30:18.519876 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.520239 kubelet[2759]: E0416 23:30:18.520197 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.520239 kubelet[2759]: W0416 23:30:18.520214 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.520239 kubelet[2759]: E0416 23:30:18.520225 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.521380 kubelet[2759]: E0416 23:30:18.521326 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.521380 kubelet[2759]: W0416 23:30:18.521352 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.521380 kubelet[2759]: E0416 23:30:18.521368 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.522070 kubelet[2759]: E0416 23:30:18.522041 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.522070 kubelet[2759]: W0416 23:30:18.522068 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.522207 kubelet[2759]: E0416 23:30:18.522083 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:18.541990 kubelet[2759]: E0416 23:30:18.541958 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:18.541990 kubelet[2759]: W0416 23:30:18.541986 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:18.542130 kubelet[2759]: E0416 23:30:18.542007 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:18.561838 containerd[1544]: time="2026-04-16T23:30:18.561790153Z" level=info msg="connecting to shim f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a" address="unix:///run/containerd/s/f3519fc1435398dd589160d5232b59acd4d1e7e139b12f8f36a2d71a7a494d57" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:18.570980 containerd[1544]: time="2026-04-16T23:30:18.570826407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54c7884887-bz44m,Uid:0256e09d-d4cd-4cd9-b133-14f0b239830f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bb754530b295034ba2b7065816d31a1c0ff9ffae0c56d78b21f4e02733fad40\"" Apr 16 23:30:18.574655 containerd[1544]: time="2026-04-16T23:30:18.574587016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 23:30:18.598954 systemd[1]: Started cri-containerd-f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a.scope - libcontainer container f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a. Apr 16 23:30:18.630285 containerd[1544]: time="2026-04-16T23:30:18.630221450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xd69l,Uid:e0914f08-5b6e-4d9e-8c70-da98100bfac6,Namespace:calico-system,Attempt:0,} returns sandbox id \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\"" Apr 16 23:30:20.153686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739270290.mount: Deactivated successfully. 
Apr 16 23:30:20.160930 kubelet[2759]: E0416 23:30:20.160887 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:20.990436 containerd[1544]: time="2026-04-16T23:30:20.989935248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:20.991639 containerd[1544]: time="2026-04-16T23:30:20.991608605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 16 23:30:20.993946 containerd[1544]: time="2026-04-16T23:30:20.993916087Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:20.998212 containerd[1544]: time="2026-04-16T23:30:20.998127662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:20.999297 containerd[1544]: time="2026-04-16T23:30:20.998919678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.424191451s" Apr 16 23:30:20.999297 containerd[1544]: time="2026-04-16T23:30:20.998954240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 16 23:30:21.002130 containerd[1544]: time="2026-04-16T23:30:21.002098899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 23:30:21.017495 containerd[1544]: time="2026-04-16T23:30:21.017454568Z" level=info msg="CreateContainer within sandbox \"8bb754530b295034ba2b7065816d31a1c0ff9ffae0c56d78b21f4e02733fad40\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 23:30:21.035662 containerd[1544]: time="2026-04-16T23:30:21.034143406Z" level=info msg="Container 421733ed0180deeff18c0882e89edfa06d54bd8f83059fd46e6d2bdc64355979: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:21.045378 containerd[1544]: time="2026-04-16T23:30:21.045331156Z" level=info msg="CreateContainer within sandbox \"8bb754530b295034ba2b7065816d31a1c0ff9ffae0c56d78b21f4e02733fad40\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"421733ed0180deeff18c0882e89edfa06d54bd8f83059fd46e6d2bdc64355979\"" Apr 16 23:30:21.046556 containerd[1544]: time="2026-04-16T23:30:21.046503474Z" level=info msg="StartContainer for \"421733ed0180deeff18c0882e89edfa06d54bd8f83059fd46e6d2bdc64355979\"" Apr 16 23:30:21.049041 containerd[1544]: time="2026-04-16T23:30:21.049003962Z" level=info msg="connecting to shim 421733ed0180deeff18c0882e89edfa06d54bd8f83059fd46e6d2bdc64355979" address="unix:///run/containerd/s/d106770a4e63929205abe26696e6fbf5cf68afef398a3e3ae995b3d8653b7eb4" protocol=ttrpc version=3 Apr 16 23:30:21.076046 systemd[1]: Started cri-containerd-421733ed0180deeff18c0882e89edfa06d54bd8f83059fd46e6d2bdc64355979.scope - libcontainer container 421733ed0180deeff18c0882e89edfa06d54bd8f83059fd46e6d2bdc64355979. 
Apr 16 23:30:21.125601 containerd[1544]: time="2026-04-16T23:30:21.125555730Z" level=info msg="StartContainer for \"421733ed0180deeff18c0882e89edfa06d54bd8f83059fd46e6d2bdc64355979\" returns successfully" Apr 16 23:30:21.300739 kubelet[2759]: E0416 23:30:21.299482 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.300739 kubelet[2759]: W0416 23:30:21.299503 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.300739 kubelet[2759]: E0416 23:30:21.299550 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.300739 kubelet[2759]: E0416 23:30:21.300597 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.300739 kubelet[2759]: W0416 23:30:21.300629 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.300739 kubelet[2759]: E0416 23:30:21.300664 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.301999 kubelet[2759]: E0416 23:30:21.301977 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.301999 kubelet[2759]: W0416 23:30:21.301995 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.302181 kubelet[2759]: E0416 23:30:21.302009 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.304172 kubelet[2759]: E0416 23:30:21.304138 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.304172 kubelet[2759]: W0416 23:30:21.304172 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.304284 kubelet[2759]: E0416 23:30:21.304189 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.305946 kubelet[2759]: E0416 23:30:21.305923 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.305946 kubelet[2759]: W0416 23:30:21.305940 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.306136 kubelet[2759]: E0416 23:30:21.306102 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.306503 kubelet[2759]: E0416 23:30:21.306480 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.306503 kubelet[2759]: W0416 23:30:21.306495 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.306588 kubelet[2759]: E0416 23:30:21.306508 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.306855 kubelet[2759]: E0416 23:30:21.306836 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.306855 kubelet[2759]: W0416 23:30:21.306851 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.306931 kubelet[2759]: E0416 23:30:21.306863 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.307154 kubelet[2759]: E0416 23:30:21.307134 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.307203 kubelet[2759]: W0416 23:30:21.307158 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.307203 kubelet[2759]: E0416 23:30:21.307173 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.307811 kubelet[2759]: E0416 23:30:21.307688 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.307811 kubelet[2759]: W0416 23:30:21.307808 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.307984 kubelet[2759]: E0416 23:30:21.307822 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.309163 kubelet[2759]: E0416 23:30:21.309136 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.309163 kubelet[2759]: W0416 23:30:21.309157 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.309163 kubelet[2759]: E0416 23:30:21.309170 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.309345 kubelet[2759]: E0416 23:30:21.309326 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.309345 kubelet[2759]: W0416 23:30:21.309339 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.309397 kubelet[2759]: E0416 23:30:21.309348 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.309507 kubelet[2759]: E0416 23:30:21.309488 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.309507 kubelet[2759]: W0416 23:30:21.309501 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.309563 kubelet[2759]: E0416 23:30:21.309512 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.309682 kubelet[2759]: E0416 23:30:21.309665 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.309734 kubelet[2759]: W0416 23:30:21.309684 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.309734 kubelet[2759]: E0416 23:30:21.309701 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.309842 kubelet[2759]: E0416 23:30:21.309824 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.310775 kubelet[2759]: W0416 23:30:21.309845 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.310775 kubelet[2759]: E0416 23:30:21.309853 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.310775 kubelet[2759]: E0416 23:30:21.309973 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.310775 kubelet[2759]: W0416 23:30:21.309979 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.310775 kubelet[2759]: E0416 23:30:21.309995 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.335663 kubelet[2759]: E0416 23:30:21.335614 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.335663 kubelet[2759]: W0416 23:30:21.335656 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.335845 kubelet[2759]: E0416 23:30:21.335679 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.335884 kubelet[2759]: E0416 23:30:21.335866 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.335884 kubelet[2759]: W0416 23:30:21.335878 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.335959 kubelet[2759]: E0416 23:30:21.335887 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.336140 kubelet[2759]: E0416 23:30:21.336124 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.336140 kubelet[2759]: W0416 23:30:21.336138 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.336208 kubelet[2759]: E0416 23:30:21.336149 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.337034 kubelet[2759]: E0416 23:30:21.337008 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.337162 kubelet[2759]: W0416 23:30:21.337127 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.337162 kubelet[2759]: E0416 23:30:21.337149 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.337345 kubelet[2759]: E0416 23:30:21.337313 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.337345 kubelet[2759]: W0416 23:30:21.337326 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.337783 kubelet[2759]: E0416 23:30:21.337353 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.337909 kubelet[2759]: E0416 23:30:21.337890 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.337948 kubelet[2759]: W0416 23:30:21.337907 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.337948 kubelet[2759]: E0416 23:30:21.337922 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.338199 kubelet[2759]: E0416 23:30:21.338174 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.338199 kubelet[2759]: W0416 23:30:21.338190 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.338337 kubelet[2759]: E0416 23:30:21.338202 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.338532 kubelet[2759]: E0416 23:30:21.338517 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.338532 kubelet[2759]: W0416 23:30:21.338529 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.339347 kubelet[2759]: E0416 23:30:21.338540 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.339589 kubelet[2759]: E0416 23:30:21.339572 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.339589 kubelet[2759]: W0416 23:30:21.339587 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.339788 kubelet[2759]: E0416 23:30:21.339614 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.339940 kubelet[2759]: E0416 23:30:21.339907 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.339940 kubelet[2759]: W0416 23:30:21.339936 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.340184 kubelet[2759]: E0416 23:30:21.339948 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.340184 kubelet[2759]: E0416 23:30:21.340137 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.340184 kubelet[2759]: W0416 23:30:21.340153 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.340184 kubelet[2759]: E0416 23:30:21.340162 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.340326 kubelet[2759]: E0416 23:30:21.340285 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.340326 kubelet[2759]: W0416 23:30:21.340303 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.340326 kubelet[2759]: E0416 23:30:21.340314 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.340491 kubelet[2759]: E0416 23:30:21.340461 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.340491 kubelet[2759]: W0416 23:30:21.340472 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.340491 kubelet[2759]: E0416 23:30:21.340480 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.341985 kubelet[2759]: E0416 23:30:21.341808 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.341985 kubelet[2759]: W0416 23:30:21.341833 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.341985 kubelet[2759]: E0416 23:30:21.341849 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.342244 kubelet[2759]: E0416 23:30:21.342231 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.342425 kubelet[2759]: W0416 23:30:21.342297 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.342425 kubelet[2759]: E0416 23:30:21.342314 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.342594 kubelet[2759]: E0416 23:30:21.342583 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.342660 kubelet[2759]: W0416 23:30:21.342647 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.342746 kubelet[2759]: E0416 23:30:21.342733 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:21.342952 kubelet[2759]: E0416 23:30:21.342939 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.343164 kubelet[2759]: W0416 23:30:21.343025 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.343164 kubelet[2759]: E0416 23:30:21.343058 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:21.343306 kubelet[2759]: E0416 23:30:21.343291 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:21.343365 kubelet[2759]: W0416 23:30:21.343349 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:21.343418 kubelet[2759]: E0416 23:30:21.343404 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:22.161564 kubelet[2759]: E0416 23:30:22.160993 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:22.289712 kubelet[2759]: I0416 23:30:22.289489 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-54c7884887-bz44m" podStartSLOduration=1.862905534 podStartE2EDuration="4.289472718s" podCreationTimestamp="2026-04-16 23:30:18 +0000 UTC" firstStartedPulling="2026-04-16 23:30:18.573810476 +0000 UTC m=+20.547641584" lastFinishedPulling="2026-04-16 23:30:21.00037766 +0000 UTC m=+22.974208768" observedRunningTime="2026-04-16 23:30:21.303011859 +0000 UTC m=+23.276842967" watchObservedRunningTime="2026-04-16 23:30:22.289472718 +0000 UTC m=+24.263303826" Apr 16 23:30:22.317315 kubelet[2759]: E0416 23:30:22.317272 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.317315 kubelet[2759]: W0416 23:30:22.317305 2759 driver-call.go:149] FlexVolume: driver 
call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.317676 kubelet[2759]: E0416 23:30:22.317327 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:22.317800 kubelet[2759]: E0416 23:30:22.317779 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.317843 kubelet[2759]: W0416 23:30:22.317796 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.317843 kubelet[2759]: E0416 23:30:22.317819 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:22.318004 kubelet[2759]: E0416 23:30:22.317977 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.318004 kubelet[2759]: W0416 23:30:22.317994 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.318004 kubelet[2759]: E0416 23:30:22.318003 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:22.318471 kubelet[2759]: E0416 23:30:22.318188 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.318471 kubelet[2759]: W0416 23:30:22.318203 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.318471 kubelet[2759]: E0416 23:30:22.318213 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:22.318984 kubelet[2759]: E0416 23:30:22.318958 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.318984 kubelet[2759]: W0416 23:30:22.318978 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.319080 kubelet[2759]: E0416 23:30:22.318990 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:22.319848 kubelet[2759]: E0416 23:30:22.319817 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.319848 kubelet[2759]: W0416 23:30:22.319836 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.319848 kubelet[2759]: E0416 23:30:22.319848 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:22.320056 kubelet[2759]: E0416 23:30:22.320026 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.320056 kubelet[2759]: W0416 23:30:22.320046 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.320056 kubelet[2759]: E0416 23:30:22.320056 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:22.320470 kubelet[2759]: E0416 23:30:22.320383 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.320637 kubelet[2759]: W0416 23:30:22.320401 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.320685 kubelet[2759]: E0416 23:30:22.320639 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:22.320914 kubelet[2759]: E0416 23:30:22.320894 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.320914 kubelet[2759]: W0416 23:30:22.320909 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.320968 kubelet[2759]: E0416 23:30:22.320920 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:22.321090 kubelet[2759]: E0416 23:30:22.321068 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.321090 kubelet[2759]: W0416 23:30:22.321084 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.321166 kubelet[2759]: E0416 23:30:22.321092 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:30:22.321302 kubelet[2759]: E0416 23:30:22.321272 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:30:22.321302 kubelet[2759]: W0416 23:30:22.321296 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:30:22.321345 kubelet[2759]: E0416 23:30:22.321306 2759 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:30:22.665756 containerd[1544]: time="2026-04-16T23:30:22.665464586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:22.667049 containerd[1544]: time="2026-04-16T23:30:22.666893157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 16 23:30:22.667988 containerd[1544]: time="2026-04-16T23:30:22.667933584Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:22.671418 containerd[1544]: time="2026-04-16T23:30:22.671378285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:22.672086 containerd[1544]: time="2026-04-16T23:30:22.671897118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.669639609s" Apr 16 23:30:22.672086 containerd[1544]: time="2026-04-16T23:30:22.672014806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 16 23:30:22.677739 containerd[1544]: time="2026-04-16T23:30:22.677628806Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 23:30:22.687885 containerd[1544]: time="2026-04-16T23:30:22.687842620Z" level=info msg="Container 062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:22.701560 containerd[1544]: time="2026-04-16T23:30:22.701492856Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763\"" Apr 16 23:30:22.702846 containerd[1544]: time="2026-04-16T23:30:22.702569645Z" level=info msg="StartContainer for \"062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763\"" Apr 16 23:30:22.704535 containerd[1544]: time="2026-04-16T23:30:22.704505329Z" level=info msg="connecting to shim 062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763" address="unix:///run/containerd/s/f3519fc1435398dd589160d5232b59acd4d1e7e139b12f8f36a2d71a7a494d57" protocol=ttrpc version=3 Apr 16 23:30:22.728903 systemd[1]: Started cri-containerd-062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763.scope - libcontainer container 062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763. Apr 16 23:30:22.798119 containerd[1544]: time="2026-04-16T23:30:22.798061967Z" level=info msg="StartContainer for \"062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763\" returns successfully" Apr 16 23:30:22.813887 systemd[1]: cri-containerd-062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763.scope: Deactivated successfully. 
Apr 16 23:30:22.818060 containerd[1544]: time="2026-04-16T23:30:22.818021127Z" level=info msg="received container exit event container_id:\"062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763\" id:\"062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763\" pid:3421 exited_at:{seconds:1776382222 nanos:817103308}" Apr 16 23:30:22.841641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-062702e1b2ed71930c240a056f2def551261f09c889d4c3bcb9d67d53134b763-rootfs.mount: Deactivated successfully. Apr 16 23:30:23.288458 containerd[1544]: time="2026-04-16T23:30:23.288022969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 23:30:24.161745 kubelet[2759]: E0416 23:30:24.160779 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:26.168163 kubelet[2759]: E0416 23:30:26.167915 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:28.165534 kubelet[2759]: E0416 23:30:28.165457 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:29.965973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499008662.mount: Deactivated successfully. 
Apr 16 23:30:29.994052 containerd[1544]: time="2026-04-16T23:30:29.992989985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 16 23:30:29.994052 containerd[1544]: time="2026-04-16T23:30:29.993149473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:29.994052 containerd[1544]: time="2026-04-16T23:30:29.993997834Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:29.997016 containerd[1544]: time="2026-04-16T23:30:29.996976978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:29.997793 containerd[1544]: time="2026-04-16T23:30:29.997754456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.708949598s" Apr 16 23:30:29.997793 containerd[1544]: time="2026-04-16T23:30:29.997791657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 16 23:30:30.006558 containerd[1544]: time="2026-04-16T23:30:30.006369623Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 23:30:30.018723 containerd[1544]: time="2026-04-16T23:30:30.018089850Z" level=info msg="Container 
a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:30.032876 containerd[1544]: time="2026-04-16T23:30:30.032827579Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d\"" Apr 16 23:30:30.037261 containerd[1544]: time="2026-04-16T23:30:30.037211903Z" level=info msg="StartContainer for \"a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d\"" Apr 16 23:30:30.040886 containerd[1544]: time="2026-04-16T23:30:30.040829352Z" level=info msg="connecting to shim a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d" address="unix:///run/containerd/s/f3519fc1435398dd589160d5232b59acd4d1e7e139b12f8f36a2d71a7a494d57" protocol=ttrpc version=3 Apr 16 23:30:30.066095 systemd[1]: Started cri-containerd-a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d.scope - libcontainer container a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d. Apr 16 23:30:30.135980 containerd[1544]: time="2026-04-16T23:30:30.135919313Z" level=info msg="StartContainer for \"a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d\" returns successfully" Apr 16 23:30:30.162738 kubelet[2759]: E0416 23:30:30.161935 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:30.237272 systemd[1]: cri-containerd-a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d.scope: Deactivated successfully. 
Apr 16 23:30:30.239401 containerd[1544]: time="2026-04-16T23:30:30.239234858Z" level=info msg="received container exit event container_id:\"a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d\" id:\"a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d\" pid:3480 exited_at:{seconds:1776382230 nanos:238994807}" Apr 16 23:30:30.266184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a171c0486d01f94a2f470181e6332a6d0035312dc3b5c3433bd13200a7b2537d-rootfs.mount: Deactivated successfully. Apr 16 23:30:31.314643 containerd[1544]: time="2026-04-16T23:30:31.314213797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 23:30:32.161535 kubelet[2759]: E0416 23:30:32.161393 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:34.162726 kubelet[2759]: E0416 23:30:34.161738 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:35.531749 containerd[1544]: time="2026-04-16T23:30:35.531064059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:35.533015 containerd[1544]: time="2026-04-16T23:30:35.532988576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 16 23:30:35.533844 containerd[1544]: time="2026-04-16T23:30:35.533817728Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:35.537380 containerd[1544]: time="2026-04-16T23:30:35.537349948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:35.538280 containerd[1544]: time="2026-04-16T23:30:35.537930251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 4.223583728s" Apr 16 23:30:35.538610 containerd[1544]: time="2026-04-16T23:30:35.538592278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 16 23:30:35.545042 containerd[1544]: time="2026-04-16T23:30:35.544963170Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 23:30:35.558817 containerd[1544]: time="2026-04-16T23:30:35.558749476Z" level=info msg="Container 32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:35.571839 containerd[1544]: time="2026-04-16T23:30:35.571768991Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f\"" Apr 16 23:30:35.574045 containerd[1544]: time="2026-04-16T23:30:35.573976919Z" 
level=info msg="StartContainer for \"32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f\"" Apr 16 23:30:35.577535 containerd[1544]: time="2026-04-16T23:30:35.577503378Z" level=info msg="connecting to shim 32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f" address="unix:///run/containerd/s/f3519fc1435398dd589160d5232b59acd4d1e7e139b12f8f36a2d71a7a494d57" protocol=ttrpc version=3 Apr 16 23:30:35.606043 systemd[1]: Started cri-containerd-32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f.scope - libcontainer container 32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f. Apr 16 23:30:35.680007 containerd[1544]: time="2026-04-16T23:30:35.679969395Z" level=info msg="StartContainer for \"32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f\" returns successfully" Apr 16 23:30:36.161262 kubelet[2759]: E0416 23:30:36.161194 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trmr9" podUID="64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8" Apr 16 23:30:36.233166 containerd[1544]: time="2026-04-16T23:30:36.233106786Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:30:36.238403 kubelet[2759]: I0416 23:30:36.238348 2759 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 16 23:30:36.239361 systemd[1]: cri-containerd-32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f.scope: Deactivated successfully. 
Apr 16 23:30:36.239686 systemd[1]: cri-containerd-32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f.scope: Consumed 524ms CPU time, 192.1M memory peak, 171.3M written to disk. Apr 16 23:30:36.245422 containerd[1544]: time="2026-04-16T23:30:36.245124528Z" level=info msg="received container exit event container_id:\"32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f\" id:\"32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f\" pid:3541 exited_at:{seconds:1776382236 nanos:244900519}" Apr 16 23:30:36.287941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32e290495c2a8bd1063589f7945e82feeddcb34146f566f256ce89189eac2e5f-rootfs.mount: Deactivated successfully. Apr 16 23:30:36.340534 systemd[1]: Created slice kubepods-burstable-pod2bf6e3dc_0126_4a60_8616_f3a621631fa4.slice - libcontainer container kubepods-burstable-pod2bf6e3dc_0126_4a60_8616_f3a621631fa4.slice. Apr 16 23:30:36.362064 systemd[1]: Created slice kubepods-burstable-pod4946c3d7_bfb6_4373_a3d7_415927dfe156.slice - libcontainer container kubepods-burstable-pod4946c3d7_bfb6_4373_a3d7_415927dfe156.slice. Apr 16 23:30:36.403373 systemd[1]: Created slice kubepods-besteffort-pod1c997839_dd97_4bff_97ae_590f0c51aa4b.slice - libcontainer container kubepods-besteffort-pod1c997839_dd97_4bff_97ae_590f0c51aa4b.slice. Apr 16 23:30:36.414849 systemd[1]: Created slice kubepods-besteffort-podee323213_77a2_4e82_98b6_21f5d91a3940.slice - libcontainer container kubepods-besteffort-podee323213_77a2_4e82_98b6_21f5d91a3940.slice. Apr 16 23:30:36.421942 systemd[1]: Created slice kubepods-besteffort-pod1e919261_a6de_4170_8882_981734963b11.slice - libcontainer container kubepods-besteffort-pod1e919261_a6de_4170_8882_981734963b11.slice. Apr 16 23:30:36.428394 systemd[1]: Created slice kubepods-besteffort-podfb3cba98_8329_4d93_b69b_cc6abca1373a.slice - libcontainer container kubepods-besteffort-podfb3cba98_8329_4d93_b69b_cc6abca1373a.slice. 
Apr 16 23:30:36.439850 systemd[1]: Created slice kubepods-besteffort-pod8073296a_4e06_444b_9779_9a044c70aebc.slice - libcontainer container kubepods-besteffort-pod8073296a_4e06_444b_9779_9a044c70aebc.slice. Apr 16 23:30:36.453493 kubelet[2759]: I0416 23:30:36.452988 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbj7r\" (UniqueName: \"kubernetes.io/projected/ee323213-77a2-4e82-98b6-21f5d91a3940-kube-api-access-sbj7r\") pod \"calico-apiserver-784c96cd86-kh6sr\" (UID: \"ee323213-77a2-4e82-98b6-21f5d91a3940\") " pod="calico-system/calico-apiserver-784c96cd86-kh6sr" Apr 16 23:30:36.453769 kubelet[2759]: I0416 23:30:36.453747 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c997839-dd97-4bff-97ae-590f0c51aa4b-tigera-ca-bundle\") pod \"calico-kube-controllers-648d8fc559-q6448\" (UID: \"1c997839-dd97-4bff-97ae-590f0c51aa4b\") " pod="calico-system/calico-kube-controllers-648d8fc559-q6448" Apr 16 23:30:36.453874 kubelet[2759]: I0416 23:30:36.453861 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4946c3d7-bfb6-4373-a3d7-415927dfe156-config-volume\") pod \"coredns-7d764666f9-khfbq\" (UID: \"4946c3d7-bfb6-4373-a3d7-415927dfe156\") " pod="kube-system/coredns-7d764666f9-khfbq" Apr 16 23:30:36.454339 kubelet[2759]: I0416 23:30:36.454077 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee323213-77a2-4e82-98b6-21f5d91a3940-calico-apiserver-certs\") pod \"calico-apiserver-784c96cd86-kh6sr\" (UID: \"ee323213-77a2-4e82-98b6-21f5d91a3940\") " pod="calico-system/calico-apiserver-784c96cd86-kh6sr" Apr 16 23:30:36.454339 kubelet[2759]: I0416 23:30:36.454115 2759 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919261-a6de-4170-8882-981734963b11-config\") pod \"goldmane-9f7667bb8-ktxr4\" (UID: \"1e919261-a6de-4170-8882-981734963b11\") " pod="calico-system/goldmane-9f7667bb8-ktxr4" Apr 16 23:30:36.454339 kubelet[2759]: I0416 23:30:36.454131 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e919261-a6de-4170-8882-981734963b11-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-ktxr4\" (UID: \"1e919261-a6de-4170-8882-981734963b11\") " pod="calico-system/goldmane-9f7667bb8-ktxr4" Apr 16 23:30:36.454339 kubelet[2759]: I0416 23:30:36.454151 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqrrw\" (UniqueName: \"kubernetes.io/projected/1c997839-dd97-4bff-97ae-590f0c51aa4b-kube-api-access-xqrrw\") pod \"calico-kube-controllers-648d8fc559-q6448\" (UID: \"1c997839-dd97-4bff-97ae-590f0c51aa4b\") " pod="calico-system/calico-kube-controllers-648d8fc559-q6448" Apr 16 23:30:36.454339 kubelet[2759]: I0416 23:30:36.454168 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8073296a-4e06-444b-9779-9a044c70aebc-whisker-backend-key-pair\") pod \"whisker-5656769985-7bnsx\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " pod="calico-system/whisker-5656769985-7bnsx" Apr 16 23:30:36.454847 kubelet[2759]: I0416 23:30:36.454787 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwjj\" (UniqueName: \"kubernetes.io/projected/2bf6e3dc-0126-4a60-8616-f3a621631fa4-kube-api-access-9dwjj\") pod \"coredns-7d764666f9-dsxbp\" (UID: \"2bf6e3dc-0126-4a60-8616-f3a621631fa4\") " pod="kube-system/coredns-7d764666f9-dsxbp" Apr 16 
23:30:36.454847 kubelet[2759]: I0416 23:30:36.454829 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84dm\" (UniqueName: \"kubernetes.io/projected/fb3cba98-8329-4d93-b69b-cc6abca1373a-kube-api-access-g84dm\") pod \"calico-apiserver-784c96cd86-5m7xc\" (UID: \"fb3cba98-8329-4d93-b69b-cc6abca1373a\") " pod="calico-system/calico-apiserver-784c96cd86-5m7xc" Apr 16 23:30:36.454950 kubelet[2759]: I0416 23:30:36.454860 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1e919261-a6de-4170-8882-981734963b11-goldmane-key-pair\") pod \"goldmane-9f7667bb8-ktxr4\" (UID: \"1e919261-a6de-4170-8882-981734963b11\") " pod="calico-system/goldmane-9f7667bb8-ktxr4" Apr 16 23:30:36.454950 kubelet[2759]: I0416 23:30:36.454878 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-whisker-ca-bundle\") pod \"whisker-5656769985-7bnsx\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " pod="calico-system/whisker-5656769985-7bnsx" Apr 16 23:30:36.454950 kubelet[2759]: I0416 23:30:36.454897 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssrb6\" (UniqueName: \"kubernetes.io/projected/1e919261-a6de-4170-8882-981734963b11-kube-api-access-ssrb6\") pod \"goldmane-9f7667bb8-ktxr4\" (UID: \"1e919261-a6de-4170-8882-981734963b11\") " pod="calico-system/goldmane-9f7667bb8-ktxr4" Apr 16 23:30:36.454950 kubelet[2759]: I0416 23:30:36.454913 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bf6e3dc-0126-4a60-8616-f3a621631fa4-config-volume\") pod \"coredns-7d764666f9-dsxbp\" (UID: 
\"2bf6e3dc-0126-4a60-8616-f3a621631fa4\") " pod="kube-system/coredns-7d764666f9-dsxbp" Apr 16 23:30:36.454950 kubelet[2759]: I0416 23:30:36.454929 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjf6\" (UniqueName: \"kubernetes.io/projected/8073296a-4e06-444b-9779-9a044c70aebc-kube-api-access-vjjf6\") pod \"whisker-5656769985-7bnsx\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " pod="calico-system/whisker-5656769985-7bnsx" Apr 16 23:30:36.455061 kubelet[2759]: I0416 23:30:36.454949 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c4vn\" (UniqueName: \"kubernetes.io/projected/4946c3d7-bfb6-4373-a3d7-415927dfe156-kube-api-access-2c4vn\") pod \"coredns-7d764666f9-khfbq\" (UID: \"4946c3d7-bfb6-4373-a3d7-415927dfe156\") " pod="kube-system/coredns-7d764666f9-khfbq" Apr 16 23:30:36.455061 kubelet[2759]: I0416 23:30:36.454990 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb3cba98-8329-4d93-b69b-cc6abca1373a-calico-apiserver-certs\") pod \"calico-apiserver-784c96cd86-5m7xc\" (UID: \"fb3cba98-8329-4d93-b69b-cc6abca1373a\") " pod="calico-system/calico-apiserver-784c96cd86-5m7xc" Apr 16 23:30:36.455061 kubelet[2759]: I0416 23:30:36.455015 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-nginx-config\") pod \"whisker-5656769985-7bnsx\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " pod="calico-system/whisker-5656769985-7bnsx" Apr 16 23:30:36.668853 containerd[1544]: time="2026-04-16T23:30:36.668274909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dsxbp,Uid:2bf6e3dc-0126-4a60-8616-f3a621631fa4,Namespace:kube-system,Attempt:0,}" Apr 16 
23:30:36.675248 containerd[1544]: time="2026-04-16T23:30:36.674949086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-khfbq,Uid:4946c3d7-bfb6-4373-a3d7-415927dfe156,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:36.711829 containerd[1544]: time="2026-04-16T23:30:36.711795141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-648d8fc559-q6448,Uid:1c997839-dd97-4bff-97ae-590f0c51aa4b,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:36.723005 containerd[1544]: time="2026-04-16T23:30:36.722933489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-kh6sr,Uid:ee323213-77a2-4e82-98b6-21f5d91a3940,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:36.728233 containerd[1544]: time="2026-04-16T23:30:36.727935642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ktxr4,Uid:1e919261-a6de-4170-8882-981734963b11,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:36.738828 containerd[1544]: time="2026-04-16T23:30:36.738627013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-5m7xc,Uid:fb3cba98-8329-4d93-b69b-cc6abca1373a,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:36.748572 containerd[1544]: time="2026-04-16T23:30:36.748337386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5656769985-7bnsx,Uid:8073296a-4e06-444b-9779-9a044c70aebc,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:36.822535 containerd[1544]: time="2026-04-16T23:30:36.822421913Z" level=error msg="Failed to destroy network for sandbox \"1f72b7b64a5f234a8c704e27c8106bfcdc3e1d41a805cb92e2a460bcaa43a40c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.824625 containerd[1544]: time="2026-04-16T23:30:36.824573275Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-khfbq,Uid:4946c3d7-bfb6-4373-a3d7-415927dfe156,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f72b7b64a5f234a8c704e27c8106bfcdc3e1d41a805cb92e2a460bcaa43a40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.825530 kubelet[2759]: E0416 23:30:36.825391 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f72b7b64a5f234a8c704e27c8106bfcdc3e1d41a805cb92e2a460bcaa43a40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.825530 kubelet[2759]: E0416 23:30:36.825471 2759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f72b7b64a5f234a8c704e27c8106bfcdc3e1d41a805cb92e2a460bcaa43a40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-khfbq" Apr 16 23:30:36.825530 kubelet[2759]: E0416 23:30:36.825490 2759 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f72b7b64a5f234a8c704e27c8106bfcdc3e1d41a805cb92e2a460bcaa43a40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-khfbq" Apr 16 23:30:36.826036 kubelet[2759]: E0416 23:30:36.825998 2759 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-khfbq_kube-system(4946c3d7-bfb6-4373-a3d7-415927dfe156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-khfbq_kube-system(4946c3d7-bfb6-4373-a3d7-415927dfe156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f72b7b64a5f234a8c704e27c8106bfcdc3e1d41a805cb92e2a460bcaa43a40c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-khfbq" podUID="4946c3d7-bfb6-4373-a3d7-415927dfe156" Apr 16 23:30:36.839555 containerd[1544]: time="2026-04-16T23:30:36.839427646Z" level=error msg="Failed to destroy network for sandbox \"f1ec18f635e2980a920e2158b509b41dcbd32a4d2da0839992435188dab9e1c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.842753 containerd[1544]: time="2026-04-16T23:30:36.842521325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dsxbp,Uid:2bf6e3dc-0126-4a60-8616-f3a621631fa4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ec18f635e2980a920e2158b509b41dcbd32a4d2da0839992435188dab9e1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.843553 kubelet[2759]: E0416 23:30:36.843503 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ec18f635e2980a920e2158b509b41dcbd32a4d2da0839992435188dab9e1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.844206 kubelet[2759]: E0416 23:30:36.844034 2759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ec18f635e2980a920e2158b509b41dcbd32a4d2da0839992435188dab9e1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-dsxbp" Apr 16 23:30:36.844206 kubelet[2759]: E0416 23:30:36.844085 2759 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ec18f635e2980a920e2158b509b41dcbd32a4d2da0839992435188dab9e1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-dsxbp" Apr 16 23:30:36.844616 kubelet[2759]: E0416 23:30:36.844377 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-dsxbp_kube-system(2bf6e3dc-0126-4a60-8616-f3a621631fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-dsxbp_kube-system(2bf6e3dc-0126-4a60-8616-f3a621631fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1ec18f635e2980a920e2158b509b41dcbd32a4d2da0839992435188dab9e1c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-dsxbp" podUID="2bf6e3dc-0126-4a60-8616-f3a621631fa4" Apr 16 23:30:36.902653 containerd[1544]: time="2026-04-16T23:30:36.902576073Z" level=error msg="Failed to destroy 
network for sandbox \"1cef6e8e319b86f87193642286b21074e1c45c1cf02ecece15aa93c1ac52ce3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.906570 containerd[1544]: time="2026-04-16T23:30:36.906491183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ktxr4,Uid:1e919261-a6de-4170-8882-981734963b11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cef6e8e319b86f87193642286b21074e1c45c1cf02ecece15aa93c1ac52ce3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.908875 kubelet[2759]: E0416 23:30:36.907116 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cef6e8e319b86f87193642286b21074e1c45c1cf02ecece15aa93c1ac52ce3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.908875 kubelet[2759]: E0416 23:30:36.907166 2759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cef6e8e319b86f87193642286b21074e1c45c1cf02ecece15aa93c1ac52ce3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-ktxr4" Apr 16 23:30:36.908875 kubelet[2759]: E0416 23:30:36.907185 2759 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1cef6e8e319b86f87193642286b21074e1c45c1cf02ecece15aa93c1ac52ce3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-ktxr4" Apr 16 23:30:36.909155 kubelet[2759]: E0416 23:30:36.907250 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-ktxr4_calico-system(1e919261-a6de-4170-8882-981734963b11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-ktxr4_calico-system(1e919261-a6de-4170-8882-981734963b11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cef6e8e319b86f87193642286b21074e1c45c1cf02ecece15aa93c1ac52ce3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-ktxr4" podUID="1e919261-a6de-4170-8882-981734963b11" Apr 16 23:30:36.922370 containerd[1544]: time="2026-04-16T23:30:36.920964420Z" level=error msg="Failed to destroy network for sandbox \"18b771787a58387cca228cb2e74159b85194883bcdbc3e82c4da46d63d387fff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.924042 containerd[1544]: time="2026-04-16T23:30:36.923985136Z" level=error msg="Failed to destroy network for sandbox \"5294d69897e842b30a2557234d5c6088d7112e55981924ad91c5ec94cbebdde8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.925834 containerd[1544]: time="2026-04-16T23:30:36.925514354Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-648d8fc559-q6448,Uid:1c997839-dd97-4bff-97ae-590f0c51aa4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b771787a58387cca228cb2e74159b85194883bcdbc3e82c4da46d63d387fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.926223 kubelet[2759]: E0416 23:30:36.925830 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b771787a58387cca228cb2e74159b85194883bcdbc3e82c4da46d63d387fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.926223 kubelet[2759]: E0416 23:30:36.925878 2759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b771787a58387cca228cb2e74159b85194883bcdbc3e82c4da46d63d387fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-648d8fc559-q6448" Apr 16 23:30:36.926223 kubelet[2759]: E0416 23:30:36.925953 2759 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b771787a58387cca228cb2e74159b85194883bcdbc3e82c4da46d63d387fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-648d8fc559-q6448" Apr 16 23:30:36.927572 kubelet[2759]: E0416 
23:30:36.926011 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-648d8fc559-q6448_calico-system(1c997839-dd97-4bff-97ae-590f0c51aa4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-648d8fc559-q6448_calico-system(1c997839-dd97-4bff-97ae-590f0c51aa4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18b771787a58387cca228cb2e74159b85194883bcdbc3e82c4da46d63d387fff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-648d8fc559-q6448" podUID="1c997839-dd97-4bff-97ae-590f0c51aa4b" Apr 16 23:30:36.927572 kubelet[2759]: E0416 23:30:36.926901 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5294d69897e842b30a2557234d5c6088d7112e55981924ad91c5ec94cbebdde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.927572 kubelet[2759]: E0416 23:30:36.926939 2759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5294d69897e842b30a2557234d5c6088d7112e55981924ad91c5ec94cbebdde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5656769985-7bnsx" Apr 16 23:30:36.929058 containerd[1544]: time="2026-04-16T23:30:36.926579515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5656769985-7bnsx,Uid:8073296a-4e06-444b-9779-9a044c70aebc,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5294d69897e842b30a2557234d5c6088d7112e55981924ad91c5ec94cbebdde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.929223 kubelet[2759]: E0416 23:30:36.926954 2759 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5294d69897e842b30a2557234d5c6088d7112e55981924ad91c5ec94cbebdde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5656769985-7bnsx" Apr 16 23:30:36.929223 kubelet[2759]: E0416 23:30:36.927000 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5656769985-7bnsx_calico-system(8073296a-4e06-444b-9779-9a044c70aebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5656769985-7bnsx_calico-system(8073296a-4e06-444b-9779-9a044c70aebc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5294d69897e842b30a2557234d5c6088d7112e55981924ad91c5ec94cbebdde8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5656769985-7bnsx" podUID="8073296a-4e06-444b-9779-9a044c70aebc" Apr 16 23:30:36.932727 containerd[1544]: time="2026-04-16T23:30:36.932655869Z" level=error msg="Failed to destroy network for sandbox \"5b8faf195a5c0b895b726d71c12a5ede42ac418ca4767db755cbf51341970b71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 16 23:30:36.934744 containerd[1544]: time="2026-04-16T23:30:36.934427177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-kh6sr,Uid:ee323213-77a2-4e82-98b6-21f5d91a3940,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8faf195a5c0b895b726d71c12a5ede42ac418ca4767db755cbf51341970b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.935122 kubelet[2759]: E0416 23:30:36.934921 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8faf195a5c0b895b726d71c12a5ede42ac418ca4767db755cbf51341970b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.935122 kubelet[2759]: E0416 23:30:36.935000 2759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8faf195a5c0b895b726d71c12a5ede42ac418ca4767db755cbf51341970b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-784c96cd86-kh6sr" Apr 16 23:30:36.935122 kubelet[2759]: E0416 23:30:36.935023 2759 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b8faf195a5c0b895b726d71c12a5ede42ac418ca4767db755cbf51341970b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-784c96cd86-kh6sr" Apr 16 23:30:36.935289 kubelet[2759]: E0416 23:30:36.935111 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784c96cd86-kh6sr_calico-system(ee323213-77a2-4e82-98b6-21f5d91a3940)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784c96cd86-kh6sr_calico-system(ee323213-77a2-4e82-98b6-21f5d91a3940)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b8faf195a5c0b895b726d71c12a5ede42ac418ca4767db755cbf51341970b71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-784c96cd86-kh6sr" podUID="ee323213-77a2-4e82-98b6-21f5d91a3940" Apr 16 23:30:36.956554 containerd[1544]: time="2026-04-16T23:30:36.956494545Z" level=error msg="Failed to destroy network for sandbox \"25e486e43a95b28bfb065ada272eef2277229f399e06172ed14670cecd4477e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.958588 containerd[1544]: time="2026-04-16T23:30:36.958539864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-5m7xc,Uid:fb3cba98-8329-4d93-b69b-cc6abca1373a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25e486e43a95b28bfb065ada272eef2277229f399e06172ed14670cecd4477e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.958914 kubelet[2759]: E0416 23:30:36.958854 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"25e486e43a95b28bfb065ada272eef2277229f399e06172ed14670cecd4477e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:30:36.959016 kubelet[2759]: E0416 23:30:36.958935 2759 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25e486e43a95b28bfb065ada272eef2277229f399e06172ed14670cecd4477e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-784c96cd86-5m7xc" Apr 16 23:30:36.959016 kubelet[2759]: E0416 23:30:36.958953 2759 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25e486e43a95b28bfb065ada272eef2277229f399e06172ed14670cecd4477e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-784c96cd86-5m7xc" Apr 16 23:30:36.959127 kubelet[2759]: E0416 23:30:36.959015 2759 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784c96cd86-5m7xc_calico-system(fb3cba98-8329-4d93-b69b-cc6abca1373a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784c96cd86-5m7xc_calico-system(fb3cba98-8329-4d93-b69b-cc6abca1373a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25e486e43a95b28bfb065ada272eef2277229f399e06172ed14670cecd4477e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-784c96cd86-5m7xc" podUID="fb3cba98-8329-4d93-b69b-cc6abca1373a" Apr 16 23:30:37.381850 containerd[1544]: time="2026-04-16T23:30:37.381752390Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 23:30:37.400130 containerd[1544]: time="2026-04-16T23:30:37.398578059Z" level=info msg="Container c320958afe06f0fd600ac3930d7be463679628d84eb7279e3752d8b2b7dcdcf7: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:37.412589 containerd[1544]: time="2026-04-16T23:30:37.412530700Z" level=info msg="CreateContainer within sandbox \"f72eeb3bfbf7dd571afced013675f2cf4b51a82135a2dbdee149ee1b48f2979a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c320958afe06f0fd600ac3930d7be463679628d84eb7279e3752d8b2b7dcdcf7\"" Apr 16 23:30:37.413364 containerd[1544]: time="2026-04-16T23:30:37.413327449Z" level=info msg="StartContainer for \"c320958afe06f0fd600ac3930d7be463679628d84eb7279e3752d8b2b7dcdcf7\"" Apr 16 23:30:37.415276 containerd[1544]: time="2026-04-16T23:30:37.415184439Z" level=info msg="connecting to shim c320958afe06f0fd600ac3930d7be463679628d84eb7279e3752d8b2b7dcdcf7" address="unix:///run/containerd/s/f3519fc1435398dd589160d5232b59acd4d1e7e139b12f8f36a2d71a7a494d57" protocol=ttrpc version=3 Apr 16 23:30:37.440005 systemd[1]: Started cri-containerd-c320958afe06f0fd600ac3930d7be463679628d84eb7279e3752d8b2b7dcdcf7.scope - libcontainer container c320958afe06f0fd600ac3930d7be463679628d84eb7279e3752d8b2b7dcdcf7. 
Apr 16 23:30:37.513777 containerd[1544]: time="2026-04-16T23:30:37.513625274Z" level=info msg="StartContainer for \"c320958afe06f0fd600ac3930d7be463679628d84eb7279e3752d8b2b7dcdcf7\" returns successfully" Apr 16 23:30:37.767247 kubelet[2759]: I0416 23:30:37.766856 2759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/8073296a-4e06-444b-9779-9a044c70aebc-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8073296a-4e06-444b-9779-9a044c70aebc-whisker-backend-key-pair\") pod \"8073296a-4e06-444b-9779-9a044c70aebc\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " Apr 16 23:30:37.767247 kubelet[2759]: I0416 23:30:37.766955 2759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-nginx-config\" (UniqueName: \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-nginx-config\") pod \"8073296a-4e06-444b-9779-9a044c70aebc\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " Apr 16 23:30:37.767247 kubelet[2759]: I0416 23:30:37.766996 2759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-whisker-ca-bundle\") pod \"8073296a-4e06-444b-9779-9a044c70aebc\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " Apr 16 23:30:37.767247 kubelet[2759]: I0416 23:30:37.767029 2759 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/8073296a-4e06-444b-9779-9a044c70aebc-kube-api-access-vjjf6\" (UniqueName: \"kubernetes.io/projected/8073296a-4e06-444b-9779-9a044c70aebc-kube-api-access-vjjf6\") pod \"8073296a-4e06-444b-9779-9a044c70aebc\" (UID: \"8073296a-4e06-444b-9779-9a044c70aebc\") " Apr 16 23:30:37.769047 kubelet[2759]: I0416 23:30:37.768569 2759 operation_generator.go:779] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-nginx-config" pod "8073296a-4e06-444b-9779-9a044c70aebc" (UID: "8073296a-4e06-444b-9779-9a044c70aebc"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:30:37.769047 kubelet[2759]: I0416 23:30:37.768989 2759 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-whisker-ca-bundle" pod "8073296a-4e06-444b-9779-9a044c70aebc" (UID: "8073296a-4e06-444b-9779-9a044c70aebc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:30:37.772807 systemd[1]: var-lib-kubelet-pods-8073296a\x2d4e06\x2d444b\x2d9779\x2d9a044c70aebc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvjjf6.mount: Deactivated successfully. Apr 16 23:30:37.775003 kubelet[2759]: I0416 23:30:37.774963 2759 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8073296a-4e06-444b-9779-9a044c70aebc-kube-api-access-vjjf6" pod "8073296a-4e06-444b-9779-9a044c70aebc" (UID: "8073296a-4e06-444b-9779-9a044c70aebc"). InnerVolumeSpecName "kube-api-access-vjjf6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:30:37.777609 systemd[1]: var-lib-kubelet-pods-8073296a\x2d4e06\x2d444b\x2d9779\x2d9a044c70aebc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 23:30:37.779005 kubelet[2759]: I0416 23:30:37.778563 2759 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8073296a-4e06-444b-9779-9a044c70aebc-whisker-backend-key-pair" pod "8073296a-4e06-444b-9779-9a044c70aebc" (UID: "8073296a-4e06-444b-9779-9a044c70aebc"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:30:37.868192 kubelet[2759]: I0416 23:30:37.868110 2759 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8073296a-4e06-444b-9779-9a044c70aebc-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-fff9fc0546\" DevicePath \"\"" Apr 16 23:30:37.868192 kubelet[2759]: I0416 23:30:37.868148 2759 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-nginx-config\") on node \"ci-4459-2-4-n-fff9fc0546\" DevicePath \"\"" Apr 16 23:30:37.868192 kubelet[2759]: I0416 23:30:37.868159 2759 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8073296a-4e06-444b-9779-9a044c70aebc-whisker-ca-bundle\") on node \"ci-4459-2-4-n-fff9fc0546\" DevicePath \"\"" Apr 16 23:30:37.868192 kubelet[2759]: I0416 23:30:37.868168 2759 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vjjf6\" (UniqueName: \"kubernetes.io/projected/8073296a-4e06-444b-9779-9a044c70aebc-kube-api-access-vjjf6\") on node \"ci-4459-2-4-n-fff9fc0546\" DevicePath \"\"" Apr 16 23:30:38.170011 systemd[1]: Created slice kubepods-besteffort-pod64a4ac65_767d_4c6a_a7b0_e9d6eab09bc8.slice - libcontainer container kubepods-besteffort-pod64a4ac65_767d_4c6a_a7b0_e9d6eab09bc8.slice. Apr 16 23:30:38.172726 systemd[1]: Removed slice kubepods-besteffort-pod8073296a_4e06_444b_9779_9a044c70aebc.slice - libcontainer container kubepods-besteffort-pod8073296a_4e06_444b_9779_9a044c70aebc.slice. 
Apr 16 23:30:38.178342 containerd[1544]: time="2026-04-16T23:30:38.177971856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trmr9,Uid:64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:38.350035 systemd-networkd[1418]: calia0c0d7cc39e: Link UP Apr 16 23:30:38.350456 systemd-networkd[1418]: calia0c0d7cc39e: Gained carrier Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.204 [ERROR][3823] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.225 [INFO][3823] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0 csi-node-driver- calico-system 64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8 701 0 2026-04-16 23:30:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 csi-node-driver-trmr9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia0c0d7cc39e [] [] }} ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.225 [INFO][3823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" 
WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.278 [INFO][3834] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" HandleID="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Workload="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.289 [INFO][3834] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" HandleID="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Workload="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"csi-node-driver-trmr9", "timestamp":"2026-04-16 23:30:38.278233057 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000211080)} Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.290 [INFO][3834] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.290 [INFO][3834] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.290 [INFO][3834] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.294 [INFO][3834] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.301 [INFO][3834] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.307 [INFO][3834] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.310 [INFO][3834] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.313 [INFO][3834] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.313 [INFO][3834] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.315 [INFO][3834] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.321 [INFO][3834] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.328 [INFO][3834] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.65/26] block=192.168.58.64/26 handle="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.328 [INFO][3834] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.65/26] handle="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.328 [INFO][3834] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:38.377867 containerd[1544]: 2026-04-16 23:30:38.328 [INFO][3834] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.65/26] IPv6=[] ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" HandleID="k8s-pod-network.d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Workload="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" Apr 16 23:30:38.378396 containerd[1544]: 2026-04-16 23:30:38.333 [INFO][3823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"csi-node-driver-trmr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0c0d7cc39e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:38.378396 containerd[1544]: 2026-04-16 23:30:38.334 [INFO][3823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.65/32] ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" Apr 16 23:30:38.378396 containerd[1544]: 2026-04-16 23:30:38.334 [INFO][3823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0c0d7cc39e ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" Apr 16 23:30:38.378396 containerd[1544]: 2026-04-16 23:30:38.352 [INFO][3823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" Apr 16 23:30:38.378396 
containerd[1544]: 2026-04-16 23:30:38.354 [INFO][3823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d", Pod:"csi-node-driver-trmr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0c0d7cc39e", MAC:"7e:61:91:55:97:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:38.378396 containerd[1544]: 
2026-04-16 23:30:38.372 [INFO][3823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" Namespace="calico-system" Pod="csi-node-driver-trmr9" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-csi--node--driver--trmr9-eth0" Apr 16 23:30:38.410448 containerd[1544]: time="2026-04-16T23:30:38.410406736Z" level=info msg="connecting to shim d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d" address="unix:///run/containerd/s/3d890b58fbb8dbff83b58360bdc2768292cb0ef1d1d82da321d6aaf17b2d2789" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:38.442852 kubelet[2759]: I0416 23:30:38.442677 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-xd69l" podStartSLOduration=1.704710023 podStartE2EDuration="20.442662267s" podCreationTimestamp="2026-04-16 23:30:18 +0000 UTC" firstStartedPulling="2026-04-16 23:30:18.631958104 +0000 UTC m=+20.605789212" lastFinishedPulling="2026-04-16 23:30:37.369910388 +0000 UTC m=+39.343741456" observedRunningTime="2026-04-16 23:30:38.424454686 +0000 UTC m=+40.398285914" watchObservedRunningTime="2026-04-16 23:30:38.442662267 +0000 UTC m=+40.416493375" Apr 16 23:30:38.445043 systemd[1]: Started cri-containerd-d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d.scope - libcontainer container d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d. Apr 16 23:30:38.520842 systemd[1]: Created slice kubepods-besteffort-podc7a96127_29f2_466f_85b7_7a2d5d53a594.slice - libcontainer container kubepods-besteffort-podc7a96127_29f2_466f_85b7_7a2d5d53a594.slice. 
Apr 16 23:30:38.533308 containerd[1544]: time="2026-04-16T23:30:38.533270157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trmr9,Uid:64a4ac65-767d-4c6a-a7b0-e9d6eab09bc8,Namespace:calico-system,Attempt:0,} returns sandbox id \"d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d\"" Apr 16 23:30:38.536268 containerd[1544]: time="2026-04-16T23:30:38.536215784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:30:38.576279 kubelet[2759]: I0416 23:30:38.576218 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7a96127-29f2-466f-85b7-7a2d5d53a594-whisker-backend-key-pair\") pod \"whisker-6885b5698-mrztp\" (UID: \"c7a96127-29f2-466f-85b7-7a2d5d53a594\") " pod="calico-system/whisker-6885b5698-mrztp" Apr 16 23:30:38.576279 kubelet[2759]: I0416 23:30:38.576294 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcvx\" (UniqueName: \"kubernetes.io/projected/c7a96127-29f2-466f-85b7-7a2d5d53a594-kube-api-access-gvcvx\") pod \"whisker-6885b5698-mrztp\" (UID: \"c7a96127-29f2-466f-85b7-7a2d5d53a594\") " pod="calico-system/whisker-6885b5698-mrztp" Apr 16 23:30:38.576552 kubelet[2759]: I0416 23:30:38.576338 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c7a96127-29f2-466f-85b7-7a2d5d53a594-nginx-config\") pod \"whisker-6885b5698-mrztp\" (UID: \"c7a96127-29f2-466f-85b7-7a2d5d53a594\") " pod="calico-system/whisker-6885b5698-mrztp" Apr 16 23:30:38.576552 kubelet[2759]: I0416 23:30:38.576382 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7a96127-29f2-466f-85b7-7a2d5d53a594-whisker-ca-bundle\") pod 
\"whisker-6885b5698-mrztp\" (UID: \"c7a96127-29f2-466f-85b7-7a2d5d53a594\") " pod="calico-system/whisker-6885b5698-mrztp" Apr 16 23:30:38.831743 containerd[1544]: time="2026-04-16T23:30:38.831616110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6885b5698-mrztp,Uid:c7a96127-29f2-466f-85b7-7a2d5d53a594,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:38.976235 systemd-networkd[1418]: cali919d1101a4b: Link UP Apr 16 23:30:38.976912 systemd-networkd[1418]: cali919d1101a4b: Gained carrier Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.860 [ERROR][3899] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.879 [INFO][3899] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0 whisker-6885b5698- calico-system c7a96127-29f2-466f-85b7-7a2d5d53a594 901 0 2026-04-16 23:30:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6885b5698 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 whisker-6885b5698-mrztp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali919d1101a4b [] [] }} ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Namespace="calico-system" Pod="whisker-6885b5698-mrztp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.879 [INFO][3899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Namespace="calico-system" Pod="whisker-6885b5698-mrztp" 
WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.906 [INFO][3910] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" HandleID="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Workload="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.919 [INFO][3910] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" HandleID="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Workload="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"whisker-6885b5698-mrztp", "timestamp":"2026-04-16 23:30:38.906154977 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002991e0)} Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.919 [INFO][3910] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.920 [INFO][3910] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.920 [INFO][3910] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.923 [INFO][3910] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.932 [INFO][3910] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.940 [INFO][3910] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.943 [INFO][3910] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.950 [INFO][3910] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.950 [INFO][3910] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.952 [INFO][3910] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36 Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.957 [INFO][3910] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.964 [INFO][3910] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.66/26] block=192.168.58.64/26 handle="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.965 [INFO][3910] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.66/26] handle="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.965 [INFO][3910] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:38.997636 containerd[1544]: 2026-04-16 23:30:38.965 [INFO][3910] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.66/26] IPv6=[] ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" HandleID="k8s-pod-network.ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Workload="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" Apr 16 23:30:38.998218 containerd[1544]: 2026-04-16 23:30:38.968 [INFO][3899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Namespace="calico-system" Pod="whisker-6885b5698-mrztp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0", GenerateName:"whisker-6885b5698-", Namespace:"calico-system", SelfLink:"", UID:"c7a96127-29f2-466f-85b7-7a2d5d53a594", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6885b5698", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"whisker-6885b5698-mrztp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali919d1101a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:38.998218 containerd[1544]: 2026-04-16 23:30:38.968 [INFO][3899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.66/32] ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Namespace="calico-system" Pod="whisker-6885b5698-mrztp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" Apr 16 23:30:38.998218 containerd[1544]: 2026-04-16 23:30:38.968 [INFO][3899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali919d1101a4b ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Namespace="calico-system" Pod="whisker-6885b5698-mrztp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" Apr 16 23:30:38.998218 containerd[1544]: 2026-04-16 23:30:38.978 [INFO][3899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Namespace="calico-system" Pod="whisker-6885b5698-mrztp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" Apr 16 23:30:38.998218 containerd[1544]: 2026-04-16 23:30:38.979 [INFO][3899] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" Namespace="calico-system" Pod="whisker-6885b5698-mrztp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0", GenerateName:"whisker-6885b5698-", Namespace:"calico-system", SelfLink:"", UID:"c7a96127-29f2-466f-85b7-7a2d5d53a594", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6885b5698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36", Pod:"whisker-6885b5698-mrztp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali919d1101a4b", MAC:"9e:9c:55:70:0c:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:38.998218 containerd[1544]: 2026-04-16 23:30:38.993 [INFO][3899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" 
Namespace="calico-system" Pod="whisker-6885b5698-mrztp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-whisker--6885b5698--mrztp-eth0" Apr 16 23:30:39.045965 containerd[1544]: time="2026-04-16T23:30:39.045604157Z" level=info msg="connecting to shim ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36" address="unix:///run/containerd/s/710e3ff404f1ede2db4c072d9c44ea40ae878388116590b4591b10e11c46555d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:39.090380 systemd[1]: Started cri-containerd-ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36.scope - libcontainer container ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36. Apr 16 23:30:39.159832 containerd[1544]: time="2026-04-16T23:30:39.159580706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6885b5698-mrztp,Uid:c7a96127-29f2-466f-85b7-7a2d5d53a594,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36\"" Apr 16 23:30:39.389603 kubelet[2759]: I0416 23:30:39.389500 2759 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:30:39.866742 systemd-networkd[1418]: vxlan.calico: Link UP Apr 16 23:30:39.866747 systemd-networkd[1418]: vxlan.calico: Gained carrier Apr 16 23:30:39.995103 systemd-networkd[1418]: calia0c0d7cc39e: Gained IPv6LL Apr 16 23:30:40.061330 systemd-networkd[1418]: cali919d1101a4b: Gained IPv6LL Apr 16 23:30:40.172843 kubelet[2759]: I0416 23:30:40.172560 2759 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8073296a-4e06-444b-9779-9a044c70aebc" path="/var/lib/kubelet/pods/8073296a-4e06-444b-9779-9a044c70aebc/volumes" Apr 16 23:30:40.885788 containerd[1544]: time="2026-04-16T23:30:40.885671085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:40.887141 containerd[1544]: time="2026-04-16T23:30:40.886967890Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 16 23:30:40.891246 containerd[1544]: time="2026-04-16T23:30:40.891189555Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:40.895305 containerd[1544]: time="2026-04-16T23:30:40.895229054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:40.896923 containerd[1544]: time="2026-04-16T23:30:40.896501458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.360239393s" Apr 16 23:30:40.896923 containerd[1544]: time="2026-04-16T23:30:40.896582781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 16 23:30:40.897973 containerd[1544]: time="2026-04-16T23:30:40.897938308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 23:30:40.902266 containerd[1544]: time="2026-04-16T23:30:40.902239456Z" level=info msg="CreateContainer within sandbox \"d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:30:40.916069 containerd[1544]: time="2026-04-16T23:30:40.916015370Z" level=info msg="Container ebacd4759c4c6075854857224903f3da22a10d27eb3e8c40bc7d68c7ea0328b6: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:40.933024 containerd[1544]: 
time="2026-04-16T23:30:40.932963154Z" level=info msg="CreateContainer within sandbox \"d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ebacd4759c4c6075854857224903f3da22a10d27eb3e8c40bc7d68c7ea0328b6\"" Apr 16 23:30:40.934095 containerd[1544]: time="2026-04-16T23:30:40.934051792Z" level=info msg="StartContainer for \"ebacd4759c4c6075854857224903f3da22a10d27eb3e8c40bc7d68c7ea0328b6\"" Apr 16 23:30:40.938067 containerd[1544]: time="2026-04-16T23:30:40.938004568Z" level=info msg="connecting to shim ebacd4759c4c6075854857224903f3da22a10d27eb3e8c40bc7d68c7ea0328b6" address="unix:///run/containerd/s/3d890b58fbb8dbff83b58360bdc2768292cb0ef1d1d82da321d6aaf17b2d2789" protocol=ttrpc version=3 Apr 16 23:30:40.965108 systemd[1]: Started cri-containerd-ebacd4759c4c6075854857224903f3da22a10d27eb3e8c40bc7d68c7ea0328b6.scope - libcontainer container ebacd4759c4c6075854857224903f3da22a10d27eb3e8c40bc7d68c7ea0328b6. 
Apr 16 23:30:41.027323 containerd[1544]: time="2026-04-16T23:30:41.027193938Z" level=info msg="StartContainer for \"ebacd4759c4c6075854857224903f3da22a10d27eb3e8c40bc7d68c7ea0328b6\" returns successfully" Apr 16 23:30:41.531313 systemd-networkd[1418]: vxlan.calico: Gained IPv6LL Apr 16 23:30:42.584599 containerd[1544]: time="2026-04-16T23:30:42.584528168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:42.586966 containerd[1544]: time="2026-04-16T23:30:42.586899046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 16 23:30:42.588476 containerd[1544]: time="2026-04-16T23:30:42.588394695Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:42.602608 containerd[1544]: time="2026-04-16T23:30:42.601601608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:42.602608 containerd[1544]: time="2026-04-16T23:30:42.602413795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.704440367s" Apr 16 23:30:42.602608 containerd[1544]: time="2026-04-16T23:30:42.602450716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 16 23:30:42.605273 containerd[1544]: 
time="2026-04-16T23:30:42.605142284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:30:42.610669 containerd[1544]: time="2026-04-16T23:30:42.610607184Z" level=info msg="CreateContainer within sandbox \"ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 23:30:42.629890 containerd[1544]: time="2026-04-16T23:30:42.628824181Z" level=info msg="Container dc385a5c9c9c6cad2aec5b7bb2055fb45c8b33244773028763f5b191759b74da: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:42.652002 containerd[1544]: time="2026-04-16T23:30:42.651937860Z" level=info msg="CreateContainer within sandbox \"ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"dc385a5c9c9c6cad2aec5b7bb2055fb45c8b33244773028763f5b191759b74da\"" Apr 16 23:30:42.653122 containerd[1544]: time="2026-04-16T23:30:42.653068177Z" level=info msg="StartContainer for \"dc385a5c9c9c6cad2aec5b7bb2055fb45c8b33244773028763f5b191759b74da\"" Apr 16 23:30:42.655654 containerd[1544]: time="2026-04-16T23:30:42.655534498Z" level=info msg="connecting to shim dc385a5c9c9c6cad2aec5b7bb2055fb45c8b33244773028763f5b191759b74da" address="unix:///run/containerd/s/710e3ff404f1ede2db4c072d9c44ea40ae878388116590b4591b10e11c46555d" protocol=ttrpc version=3 Apr 16 23:30:42.680997 systemd[1]: Started cri-containerd-dc385a5c9c9c6cad2aec5b7bb2055fb45c8b33244773028763f5b191759b74da.scope - libcontainer container dc385a5c9c9c6cad2aec5b7bb2055fb45c8b33244773028763f5b191759b74da. 
Apr 16 23:30:42.730140 containerd[1544]: time="2026-04-16T23:30:42.729823775Z" level=info msg="StartContainer for \"dc385a5c9c9c6cad2aec5b7bb2055fb45c8b33244773028763f5b191759b74da\" returns successfully" Apr 16 23:30:44.621727 containerd[1544]: time="2026-04-16T23:30:44.621350366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:44.622561 containerd[1544]: time="2026-04-16T23:30:44.622348317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 16 23:30:44.623530 containerd[1544]: time="2026-04-16T23:30:44.623487153Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:44.626783 containerd[1544]: time="2026-04-16T23:30:44.626741055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:44.627600 containerd[1544]: time="2026-04-16T23:30:44.627520079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.022335554s" Apr 16 23:30:44.627724 containerd[1544]: time="2026-04-16T23:30:44.627705365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 16 23:30:44.628957 containerd[1544]: 
time="2026-04-16T23:30:44.628896483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 23:30:44.634817 containerd[1544]: time="2026-04-16T23:30:44.634755786Z" level=info msg="CreateContainer within sandbox \"d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 23:30:44.649421 containerd[1544]: time="2026-04-16T23:30:44.648805307Z" level=info msg="Container 3ccfdbab455eed694fd28e0031381019296c54017624f2872421c756f83e82eb: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:44.655189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3976810824.mount: Deactivated successfully. Apr 16 23:30:44.664304 containerd[1544]: time="2026-04-16T23:30:44.664257432Z" level=info msg="CreateContainer within sandbox \"d42c8b6dd775ad81a42bc897f242f52c842481aee1c698bc3d9ca56862dfb84d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3ccfdbab455eed694fd28e0031381019296c54017624f2872421c756f83e82eb\"" Apr 16 23:30:44.666052 containerd[1544]: time="2026-04-16T23:30:44.665992646Z" level=info msg="StartContainer for \"3ccfdbab455eed694fd28e0031381019296c54017624f2872421c756f83e82eb\"" Apr 16 23:30:44.668115 containerd[1544]: time="2026-04-16T23:30:44.667984589Z" level=info msg="connecting to shim 3ccfdbab455eed694fd28e0031381019296c54017624f2872421c756f83e82eb" address="unix:///run/containerd/s/3d890b58fbb8dbff83b58360bdc2768292cb0ef1d1d82da321d6aaf17b2d2789" protocol=ttrpc version=3 Apr 16 23:30:44.694037 systemd[1]: Started cri-containerd-3ccfdbab455eed694fd28e0031381019296c54017624f2872421c756f83e82eb.scope - libcontainer container 3ccfdbab455eed694fd28e0031381019296c54017624f2872421c756f83e82eb. 
Apr 16 23:30:44.759584 containerd[1544]: time="2026-04-16T23:30:44.759529381Z" level=info msg="StartContainer for \"3ccfdbab455eed694fd28e0031381019296c54017624f2872421c756f83e82eb\" returns successfully" Apr 16 23:30:45.271588 kubelet[2759]: I0416 23:30:45.271544 2759 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 23:30:45.271588 kubelet[2759]: I0416 23:30:45.271588 2759 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 23:30:45.435858 kubelet[2759]: I0416 23:30:45.435773 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-trmr9" podStartSLOduration=21.342811645 podStartE2EDuration="27.435681471s" podCreationTimestamp="2026-04-16 23:30:18 +0000 UTC" firstStartedPulling="2026-04-16 23:30:38.535912973 +0000 UTC m=+40.509744081" lastFinishedPulling="2026-04-16 23:30:44.628782839 +0000 UTC m=+46.602613907" observedRunningTime="2026-04-16 23:30:45.434873806 +0000 UTC m=+47.408704914" watchObservedRunningTime="2026-04-16 23:30:45.435681471 +0000 UTC m=+47.409512579" Apr 16 23:30:46.634167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971772758.mount: Deactivated successfully. 
Apr 16 23:30:46.653314 containerd[1544]: time="2026-04-16T23:30:46.652919387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:46.655331 containerd[1544]: time="2026-04-16T23:30:46.654897087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 16 23:30:46.655331 containerd[1544]: time="2026-04-16T23:30:46.655247737Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:46.658619 containerd[1544]: time="2026-04-16T23:30:46.658577838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:46.659453 containerd[1544]: time="2026-04-16T23:30:46.659419423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.030465979s" Apr 16 23:30:46.659571 containerd[1544]: time="2026-04-16T23:30:46.659554707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 16 23:30:46.676427 containerd[1544]: time="2026-04-16T23:30:46.676383774Z" level=info msg="CreateContainer within sandbox \"ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 23:30:46.699824 
containerd[1544]: time="2026-04-16T23:30:46.698903812Z" level=info msg="Container dd6e96a5733277865a59c167b351365db96ea378e6830c0c3e02f8cb09b14282: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:46.710867 containerd[1544]: time="2026-04-16T23:30:46.710821171Z" level=info msg="CreateContainer within sandbox \"ed009a203e998dc819b91c67119be1ea69cf70d6aa34f0f68bb15e5eacd99e36\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dd6e96a5733277865a59c167b351365db96ea378e6830c0c3e02f8cb09b14282\"" Apr 16 23:30:46.712360 containerd[1544]: time="2026-04-16T23:30:46.711835761Z" level=info msg="StartContainer for \"dd6e96a5733277865a59c167b351365db96ea378e6830c0c3e02f8cb09b14282\"" Apr 16 23:30:46.714306 containerd[1544]: time="2026-04-16T23:30:46.714063948Z" level=info msg="connecting to shim dd6e96a5733277865a59c167b351365db96ea378e6830c0c3e02f8cb09b14282" address="unix:///run/containerd/s/710e3ff404f1ede2db4c072d9c44ea40ae878388116590b4591b10e11c46555d" protocol=ttrpc version=3 Apr 16 23:30:46.740916 systemd[1]: Started cri-containerd-dd6e96a5733277865a59c167b351365db96ea378e6830c0c3e02f8cb09b14282.scope - libcontainer container dd6e96a5733277865a59c167b351365db96ea378e6830c0c3e02f8cb09b14282. 
Apr 16 23:30:46.796481 containerd[1544]: time="2026-04-16T23:30:46.796275104Z" level=info msg="StartContainer for \"dd6e96a5733277865a59c167b351365db96ea378e6830c0c3e02f8cb09b14282\" returns successfully" Apr 16 23:30:47.164142 containerd[1544]: time="2026-04-16T23:30:47.164075725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dsxbp,Uid:2bf6e3dc-0126-4a60-8616-f3a621631fa4,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:47.165493 containerd[1544]: time="2026-04-16T23:30:47.165322721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-khfbq,Uid:4946c3d7-bfb6-4373-a3d7-415927dfe156,Namespace:kube-system,Attempt:0,}" Apr 16 23:30:47.401037 systemd-networkd[1418]: cali819378706d5: Link UP Apr 16 23:30:47.401245 systemd-networkd[1418]: cali819378706d5: Gained carrier Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.223 [INFO][4420] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0 coredns-7d764666f9- kube-system 2bf6e3dc-0126-4a60-8616-f3a621631fa4 839 0 2026-04-16 23:30:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 coredns-7d764666f9-dsxbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali819378706d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.224 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.308 [INFO][4444] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" HandleID="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Workload="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.330 [INFO][4444] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" HandleID="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Workload="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103e90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"coredns-7d764666f9-dsxbp", "timestamp":"2026-04-16 23:30:47.308846481 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400018f080)} Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.330 [INFO][4444] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.330 [INFO][4444] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.332 [INFO][4444] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.339 [INFO][4444] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.352 [INFO][4444] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.364 [INFO][4444] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.366 [INFO][4444] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.370 [INFO][4444] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.370 [INFO][4444] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.373 [INFO][4444] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.381 [INFO][4444] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.390 [INFO][4444] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.67/26] block=192.168.58.64/26 handle="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.391 [INFO][4444] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.67/26] handle="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.391 [INFO][4444] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:47.422844 containerd[1544]: 2026-04-16 23:30:47.391 [INFO][4444] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.67/26] IPv6=[] ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" HandleID="k8s-pod-network.4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Workload="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" Apr 16 23:30:47.424201 containerd[1544]: 2026-04-16 23:30:47.394 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2bf6e3dc-0126-4a60-8616-f3a621631fa4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"coredns-7d764666f9-dsxbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali819378706d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:47.424201 containerd[1544]: 2026-04-16 23:30:47.394 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.67/32] ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" Apr 16 23:30:47.424201 containerd[1544]: 2026-04-16 23:30:47.394 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali819378706d5 
ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" Apr 16 23:30:47.424201 containerd[1544]: 2026-04-16 23:30:47.401 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" Apr 16 23:30:47.440310 containerd[1544]: 2026-04-16 23:30:47.402 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2bf6e3dc-0126-4a60-8616-f3a621631fa4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", 
ContainerID:"4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab", Pod:"coredns-7d764666f9-dsxbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali819378706d5", MAC:"fa:3d:2e:49:eb:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:47.440310 containerd[1544]: 2026-04-16 23:30:47.413 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" Namespace="kube-system" Pod="coredns-7d764666f9-dsxbp" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--dsxbp-eth0" Apr 16 23:30:47.486649 kubelet[2759]: I0416 23:30:47.485793 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-6885b5698-mrztp" podStartSLOduration=1.987413807 podStartE2EDuration="9.485774986s" podCreationTimestamp="2026-04-16 23:30:38 +0000 UTC" firstStartedPulling="2026-04-16 23:30:39.162115636 +0000 UTC m=+41.135946744" lastFinishedPulling="2026-04-16 23:30:46.660476815 
+0000 UTC m=+48.634307923" observedRunningTime="2026-04-16 23:30:47.48454419 +0000 UTC m=+49.458375298" watchObservedRunningTime="2026-04-16 23:30:47.485774986 +0000 UTC m=+49.459606094" Apr 16 23:30:47.534149 systemd-networkd[1418]: calic6a00e359cc: Link UP Apr 16 23:30:47.535640 systemd-networkd[1418]: calic6a00e359cc: Gained carrier Apr 16 23:30:47.555397 containerd[1544]: time="2026-04-16T23:30:47.554672661Z" level=info msg="connecting to shim 4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab" address="unix:///run/containerd/s/46fb26ccbd837b8e3f88c6177cdd86f0401de4036154d4f937b005103801bcbb" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.255 [INFO][4422] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0 coredns-7d764666f9- kube-system 4946c3d7-bfb6-4373-a3d7-415927dfe156 840 0 2026-04-16 23:30:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 coredns-7d764666f9-khfbq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic6a00e359cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.256 [INFO][4422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" 
WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.330 [INFO][4449] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" HandleID="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Workload="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.349 [INFO][4449] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" HandleID="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Workload="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fab20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"coredns-7d764666f9-khfbq", "timestamp":"2026-04-16 23:30:47.33050644 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c8580)} Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.349 [INFO][4449] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.391 [INFO][4449] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.391 [INFO][4449] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.443 [INFO][4449] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.457 [INFO][4449] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.468 [INFO][4449] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.472 [INFO][4449] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.482 [INFO][4449] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.486 [INFO][4449] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.502 [INFO][4449] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.516 [INFO][4449] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.525 [INFO][4449] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.68/26] block=192.168.58.64/26 handle="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.525 [INFO][4449] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.68/26] handle="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.525 [INFO][4449] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:47.565172 containerd[1544]: 2026-04-16 23:30:47.525 [INFO][4449] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.68/26] IPv6=[] ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" HandleID="k8s-pod-network.e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Workload="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" Apr 16 23:30:47.567274 containerd[1544]: 2026-04-16 23:30:47.529 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4946c3d7-bfb6-4373-a3d7-415927dfe156", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"coredns-7d764666f9-khfbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a00e359cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:47.567274 containerd[1544]: 2026-04-16 23:30:47.529 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.68/32] ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" Apr 16 23:30:47.567274 containerd[1544]: 2026-04-16 23:30:47.529 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6a00e359cc 
ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" Apr 16 23:30:47.567274 containerd[1544]: 2026-04-16 23:30:47.536 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" Apr 16 23:30:47.568027 containerd[1544]: 2026-04-16 23:30:47.537 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4946c3d7-bfb6-4373-a3d7-415927dfe156", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", 
ContainerID:"e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c", Pod:"coredns-7d764666f9-khfbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a00e359cc", MAC:"e2:f3:5e:b2:f1:81", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:47.568027 containerd[1544]: 2026-04-16 23:30:47.558 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" Namespace="kube-system" Pod="coredns-7d764666f9-khfbq" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-coredns--7d764666f9--khfbq-eth0" Apr 16 23:30:47.601087 systemd[1]: Started cri-containerd-4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab.scope - libcontainer container 4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab. 
Apr 16 23:30:47.616359 containerd[1544]: time="2026-04-16T23:30:47.616148757Z" level=info msg="connecting to shim e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c" address="unix:///run/containerd/s/5959d9ebafc2af3ee0aff0af9b165639dfaabe67ab0c540fadb4e565a585e9ea" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:47.675605 containerd[1544]: time="2026-04-16T23:30:47.675472589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dsxbp,Uid:2bf6e3dc-0126-4a60-8616-f3a621631fa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab\"" Apr 16 23:30:47.686166 containerd[1544]: time="2026-04-16T23:30:47.686127544Z" level=info msg="CreateContainer within sandbox \"4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:30:47.686883 systemd[1]: Started cri-containerd-e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c.scope - libcontainer container e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c. 
Apr 16 23:30:47.704358 containerd[1544]: time="2026-04-16T23:30:47.703860628Z" level=info msg="Container 974b670fa89b6c05005f196c78d321e400af49805f19d11256eb106b5c4c742c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:47.714440 containerd[1544]: time="2026-04-16T23:30:47.714395339Z" level=info msg="CreateContainer within sandbox \"4111ca902d7e24ac9a8ad833324e0cbd13883757a7de359742f9c4ba9a3ee7ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"974b670fa89b6c05005f196c78d321e400af49805f19d11256eb106b5c4c742c\"" Apr 16 23:30:47.716504 containerd[1544]: time="2026-04-16T23:30:47.716470080Z" level=info msg="StartContainer for \"974b670fa89b6c05005f196c78d321e400af49805f19d11256eb106b5c4c742c\"" Apr 16 23:30:47.717435 containerd[1544]: time="2026-04-16T23:30:47.717391547Z" level=info msg="connecting to shim 974b670fa89b6c05005f196c78d321e400af49805f19d11256eb106b5c4c742c" address="unix:///run/containerd/s/46fb26ccbd837b8e3f88c6177cdd86f0401de4036154d4f937b005103801bcbb" protocol=ttrpc version=3 Apr 16 23:30:47.748919 systemd[1]: Started cri-containerd-974b670fa89b6c05005f196c78d321e400af49805f19d11256eb106b5c4c742c.scope - libcontainer container 974b670fa89b6c05005f196c78d321e400af49805f19d11256eb106b5c4c742c. 
Apr 16 23:30:47.798095 containerd[1544]: time="2026-04-16T23:30:47.798034369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-khfbq,Uid:4946c3d7-bfb6-4373-a3d7-415927dfe156,Namespace:kube-system,Attempt:0,} returns sandbox id \"e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c\"" Apr 16 23:30:47.811508 containerd[1544]: time="2026-04-16T23:30:47.811447165Z" level=info msg="CreateContainer within sandbox \"e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:30:47.829748 containerd[1544]: time="2026-04-16T23:30:47.828029975Z" level=info msg="StartContainer for \"974b670fa89b6c05005f196c78d321e400af49805f19d11256eb106b5c4c742c\" returns successfully" Apr 16 23:30:47.832114 containerd[1544]: time="2026-04-16T23:30:47.832071655Z" level=info msg="Container 8247ad9adb0e9a086bb99630458a9c9c20940ec998263405f1e128fe0b6a0e2b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:47.841215 containerd[1544]: time="2026-04-16T23:30:47.841158523Z" level=info msg="CreateContainer within sandbox \"e17454ea23be0005e3719d2231e08385f5c9752344fc1a84e9d862d4b4b8790c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8247ad9adb0e9a086bb99630458a9c9c20940ec998263405f1e128fe0b6a0e2b\"" Apr 16 23:30:47.841836 containerd[1544]: time="2026-04-16T23:30:47.841815982Z" level=info msg="StartContainer for \"8247ad9adb0e9a086bb99630458a9c9c20940ec998263405f1e128fe0b6a0e2b\"" Apr 16 23:30:47.843418 containerd[1544]: time="2026-04-16T23:30:47.843379989Z" level=info msg="connecting to shim 8247ad9adb0e9a086bb99630458a9c9c20940ec998263405f1e128fe0b6a0e2b" address="unix:///run/containerd/s/5959d9ebafc2af3ee0aff0af9b165639dfaabe67ab0c540fadb4e565a585e9ea" protocol=ttrpc version=3 Apr 16 23:30:47.867875 systemd[1]: Started cri-containerd-8247ad9adb0e9a086bb99630458a9c9c20940ec998263405f1e128fe0b6a0e2b.scope - libcontainer container 
8247ad9adb0e9a086bb99630458a9c9c20940ec998263405f1e128fe0b6a0e2b. Apr 16 23:30:47.917301 containerd[1544]: time="2026-04-16T23:30:47.917247170Z" level=info msg="StartContainer for \"8247ad9adb0e9a086bb99630458a9c9c20940ec998263405f1e128fe0b6a0e2b\" returns successfully" Apr 16 23:30:48.482070 kubelet[2759]: I0416 23:30:48.481943 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-khfbq" podStartSLOduration=45.48192827 podStartE2EDuration="45.48192827s" podCreationTimestamp="2026-04-16 23:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:30:48.480415827 +0000 UTC m=+50.454246935" watchObservedRunningTime="2026-04-16 23:30:48.48192827 +0000 UTC m=+50.455759378" Apr 16 23:30:48.524228 kubelet[2759]: I0416 23:30:48.524155 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-dsxbp" podStartSLOduration=45.524119414 podStartE2EDuration="45.524119414s" podCreationTimestamp="2026-04-16 23:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:30:48.520127738 +0000 UTC m=+50.493958846" watchObservedRunningTime="2026-04-16 23:30:48.524119414 +0000 UTC m=+50.497950562" Apr 16 23:30:49.083047 systemd-networkd[1418]: calic6a00e359cc: Gained IPv6LL Apr 16 23:30:49.164623 containerd[1544]: time="2026-04-16T23:30:49.164563464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-648d8fc559-q6448,Uid:1c997839-dd97-4bff-97ae-590f0c51aa4b,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:49.297736 systemd-networkd[1418]: cali470add6057b: Link UP Apr 16 23:30:49.298763 systemd-networkd[1418]: cali470add6057b: Gained carrier Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.208 [INFO][4672] cni-plugin/plugin.go 342: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0 calico-kube-controllers-648d8fc559- calico-system 1c997839-dd97-4bff-97ae-590f0c51aa4b 842 0 2026-04-16 23:30:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:648d8fc559 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 calico-kube-controllers-648d8fc559-q6448 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali470add6057b [] [] }} ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.209 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.242 [INFO][4684] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" HandleID="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.253 [INFO][4684] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" 
HandleID="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002733a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"calico-kube-controllers-648d8fc559-q6448", "timestamp":"2026-04-16 23:30:49.242137314 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002f3080)} Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.253 [INFO][4684] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.253 [INFO][4684] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.253 [INFO][4684] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.256 [INFO][4684] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.263 [INFO][4684] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.268 [INFO][4684] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.271 [INFO][4684] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.273 [INFO][4684] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.273 [INFO][4684] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.275 [INFO][4684] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.282 [INFO][4684] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.289 [INFO][4684] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.69/26] block=192.168.58.64/26 handle="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.290 [INFO][4684] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.69/26] handle="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.290 [INFO][4684] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:49.323207 containerd[1544]: 2026-04-16 23:30:49.290 [INFO][4684] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.69/26] IPv6=[] ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" HandleID="k8s-pod-network.dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" Apr 16 23:30:49.326371 containerd[1544]: 2026-04-16 23:30:49.294 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0", GenerateName:"calico-kube-controllers-648d8fc559-", Namespace:"calico-system", SelfLink:"", UID:"1c997839-dd97-4bff-97ae-590f0c51aa4b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"648d8fc559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"calico-kube-controllers-648d8fc559-q6448", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali470add6057b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:49.326371 containerd[1544]: 2026-04-16 23:30:49.294 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.69/32] ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" Apr 16 23:30:49.326371 containerd[1544]: 2026-04-16 23:30:49.294 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali470add6057b ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" Apr 16 23:30:49.326371 containerd[1544]: 2026-04-16 23:30:49.299 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" Apr 16 23:30:49.326371 containerd[1544]: 2026-04-16 23:30:49.299 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0", GenerateName:"calico-kube-controllers-648d8fc559-", Namespace:"calico-system", SelfLink:"", UID:"1c997839-dd97-4bff-97ae-590f0c51aa4b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"648d8fc559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf", Pod:"calico-kube-controllers-648d8fc559-q6448", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali470add6057b", MAC:"f6:ae:1e:d0:a0:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:49.326371 containerd[1544]: 2026-04-16 23:30:49.317 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" Namespace="calico-system" Pod="calico-kube-controllers-648d8fc559-q6448" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--kube--controllers--648d8fc559--q6448-eth0" Apr 16 23:30:49.357735 containerd[1544]: time="2026-04-16T23:30:49.356869183Z" level=info msg="connecting to shim dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf" address="unix:///run/containerd/s/658b462e72953150bbbdb4c4448bd861ac873870e28fdd45a576b5b7fa64b7d2" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:49.389052 systemd[1]: Started cri-containerd-dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf.scope - libcontainer container dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf. 
Apr 16 23:30:49.402926 systemd-networkd[1418]: cali819378706d5: Gained IPv6LL Apr 16 23:30:49.450658 containerd[1544]: time="2026-04-16T23:30:49.450596054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-648d8fc559-q6448,Uid:1c997839-dd97-4bff-97ae-590f0c51aa4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf\"" Apr 16 23:30:49.453308 containerd[1544]: time="2026-04-16T23:30:49.453279170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 23:30:50.165993 containerd[1544]: time="2026-04-16T23:30:50.164602922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-5m7xc,Uid:fb3cba98-8329-4d93-b69b-cc6abca1373a,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:50.172896 containerd[1544]: time="2026-04-16T23:30:50.172859393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-kh6sr,Uid:ee323213-77a2-4e82-98b6-21f5d91a3940,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:50.379564 systemd-networkd[1418]: cali903da74edb2: Link UP Apr 16 23:30:50.380831 systemd-networkd[1418]: cali903da74edb2: Gained carrier Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.257 [INFO][4784] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0 calico-apiserver-784c96cd86- calico-system ee323213-77a2-4e82-98b6-21f5d91a3940 844 0 2026-04-16 23:30:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784c96cd86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 calico-apiserver-784c96cd86-kh6sr eth0 calico-apiserver [] [] [kns.calico-system 
ksa.calico-system.calico-apiserver] cali903da74edb2 [] [] }} ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.258 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.314 [INFO][4800] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" HandleID="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.328 [INFO][4800] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" HandleID="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbc70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"calico-apiserver-784c96cd86-kh6sr", "timestamp":"2026-04-16 23:30:50.314980456 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x40004471e0)} Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.328 [INFO][4800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.328 [INFO][4800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.328 [INFO][4800] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.332 [INFO][4800] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.342 [INFO][4800] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.349 [INFO][4800] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.352 [INFO][4800] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.355 [INFO][4800] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.355 [INFO][4800] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.358 [INFO][4800] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0 Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.363 
[INFO][4800] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.372 [INFO][4800] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.70/26] block=192.168.58.64/26 handle="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.372 [INFO][4800] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.70/26] handle="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.372 [INFO][4800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:50.402383 containerd[1544]: 2026-04-16 23:30:50.372 [INFO][4800] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.70/26] IPv6=[] ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" HandleID="k8s-pod-network.caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" Apr 16 23:30:50.404321 containerd[1544]: 2026-04-16 23:30:50.376 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0", GenerateName:"calico-apiserver-784c96cd86-", Namespace:"calico-system", SelfLink:"", 
UID:"ee323213-77a2-4e82-98b6-21f5d91a3940", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784c96cd86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"calico-apiserver-784c96cd86-kh6sr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali903da74edb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:50.404321 containerd[1544]: 2026-04-16 23:30:50.376 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.70/32] ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" Apr 16 23:30:50.404321 containerd[1544]: 2026-04-16 23:30:50.376 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali903da74edb2 ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" Apr 16 
23:30:50.404321 containerd[1544]: 2026-04-16 23:30:50.380 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" Apr 16 23:30:50.404321 containerd[1544]: 2026-04-16 23:30:50.382 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0", GenerateName:"calico-apiserver-784c96cd86-", Namespace:"calico-system", SelfLink:"", UID:"ee323213-77a2-4e82-98b6-21f5d91a3940", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784c96cd86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0", Pod:"calico-apiserver-784c96cd86-kh6sr", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali903da74edb2", MAC:"0e:4d:81:f2:30:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:50.404321 containerd[1544]: 2026-04-16 23:30:50.399 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-kh6sr" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--kh6sr-eth0" Apr 16 23:30:50.432901 containerd[1544]: time="2026-04-16T23:30:50.432785677Z" level=info msg="connecting to shim caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0" address="unix:///run/containerd/s/94424988d55a5d527cb15f2062513a6efe42ad85c1d6bc777430b4818fa937d5" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:50.489091 systemd[1]: Started cri-containerd-caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0.scope - libcontainer container caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0. 
Apr 16 23:30:50.510877 systemd-networkd[1418]: cali9af5a09e1a2: Link UP Apr 16 23:30:50.512587 systemd-networkd[1418]: cali9af5a09e1a2: Gained carrier Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.267 [INFO][4767] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0 calico-apiserver-784c96cd86- calico-system fb3cba98-8329-4d93-b69b-cc6abca1373a 843 0 2026-04-16 23:30:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784c96cd86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 calico-apiserver-784c96cd86-5m7xc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9af5a09e1a2 [] [] }} ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.268 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.322 [INFO][4805] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" HandleID="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" Apr 16 23:30:50.535988 
containerd[1544]: 2026-04-16 23:30:50.337 [INFO][4805] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" HandleID="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000309f20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"calico-apiserver-784c96cd86-5m7xc", "timestamp":"2026-04-16 23:30:50.322514787 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000349080)} Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.337 [INFO][4805] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.372 [INFO][4805] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.372 [INFO][4805] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.434 [INFO][4805] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.450 [INFO][4805] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.457 [INFO][4805] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.465 [INFO][4805] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.473 [INFO][4805] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.476 [INFO][4805] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.482 [INFO][4805] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.490 [INFO][4805] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.502 [INFO][4805] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.71/26] block=192.168.58.64/26 handle="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.502 [INFO][4805] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.71/26] handle="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.503 [INFO][4805] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:50.535988 containerd[1544]: 2026-04-16 23:30:50.503 [INFO][4805] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.71/26] IPv6=[] ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" HandleID="k8s-pod-network.933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Workload="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" Apr 16 23:30:50.536637 containerd[1544]: 2026-04-16 23:30:50.508 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0", GenerateName:"calico-apiserver-784c96cd86-", Namespace:"calico-system", SelfLink:"", UID:"fb3cba98-8329-4d93-b69b-cc6abca1373a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"784c96cd86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"calico-apiserver-784c96cd86-5m7xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9af5a09e1a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:50.536637 containerd[1544]: 2026-04-16 23:30:50.508 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.71/32] ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" Apr 16 23:30:50.536637 containerd[1544]: 2026-04-16 23:30:50.508 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9af5a09e1a2 ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" Apr 16 23:30:50.536637 containerd[1544]: 2026-04-16 23:30:50.513 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" 
WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" Apr 16 23:30:50.536637 containerd[1544]: 2026-04-16 23:30:50.514 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0", GenerateName:"calico-apiserver-784c96cd86-", Namespace:"calico-system", SelfLink:"", UID:"fb3cba98-8329-4d93-b69b-cc6abca1373a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784c96cd86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf", Pod:"calico-apiserver-784c96cd86-5m7xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9af5a09e1a2", MAC:"3a:51:a6:35:ec:49", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:50.536637 containerd[1544]: 2026-04-16 23:30:50.529 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" Namespace="calico-system" Pod="calico-apiserver-784c96cd86-5m7xc" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-calico--apiserver--784c96cd86--5m7xc-eth0" Apr 16 23:30:50.573467 containerd[1544]: time="2026-04-16T23:30:50.573417898Z" level=info msg="connecting to shim 933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf" address="unix:///run/containerd/s/4f18c9fd1bdb317acd9877d931f901461fd3eb8cf2b8853d701ea11e28e4b5e1" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:50.631276 systemd[1]: Started cri-containerd-933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf.scope - libcontainer container 933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf. 
Apr 16 23:30:50.654607 containerd[1544]: time="2026-04-16T23:30:50.654554651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-kh6sr,Uid:ee323213-77a2-4e82-98b6-21f5d91a3940,Namespace:calico-system,Attempt:0,} returns sandbox id \"caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0\"" Apr 16 23:30:50.691016 containerd[1544]: time="2026-04-16T23:30:50.689877241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784c96cd86-5m7xc,Uid:fb3cba98-8329-4d93-b69b-cc6abca1373a,Namespace:calico-system,Attempt:0,} returns sandbox id \"933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf\"" Apr 16 23:30:51.133471 systemd-networkd[1418]: cali470add6057b: Gained IPv6LL Apr 16 23:30:51.898988 systemd-networkd[1418]: cali903da74edb2: Gained IPv6LL Apr 16 23:30:52.164166 containerd[1544]: time="2026-04-16T23:30:52.163906803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ktxr4,Uid:1e919261-a6de-4170-8882-981734963b11,Namespace:calico-system,Attempt:0,}" Apr 16 23:30:52.220550 systemd-networkd[1418]: cali9af5a09e1a2: Gained IPv6LL Apr 16 23:30:52.396415 systemd-networkd[1418]: cali18d7b5ba868: Link UP Apr 16 23:30:52.400953 systemd-networkd[1418]: cali18d7b5ba868: Gained carrier Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.248 [INFO][4955] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0 goldmane-9f7667bb8- calico-system 1e919261-a6de-4170-8882-981734963b11 845 0 2026-04-16 23:30:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-fff9fc0546 goldmane-9f7667bb8-ktxr4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] 
cali18d7b5ba868 [] [] }} ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.248 [INFO][4955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.294 [INFO][4970] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" HandleID="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Workload="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.310 [INFO][4970] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" HandleID="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Workload="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273370), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fff9fc0546", "pod":"goldmane-9f7667bb8-ktxr4", "timestamp":"2026-04-16 23:30:52.294927762 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fff9fc0546", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400035adc0)} Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.310 [INFO][4970] ipam/ipam_plugin.go 
438: About to acquire host-wide IPAM lock. Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.310 [INFO][4970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.310 [INFO][4970] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fff9fc0546' Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.315 [INFO][4970] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.326 [INFO][4970] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.344 [INFO][4970] ipam/ipam.go 526: Trying affinity for 192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.350 [INFO][4970] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.355 [INFO][4970] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.355 [INFO][4970] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.358 [INFO][4970] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9 Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.365 [INFO][4970] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.64/26 
handle="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.377 [INFO][4970] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.72/26] block=192.168.58.64/26 handle="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.377 [INFO][4970] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.72/26] handle="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" host="ci-4459-2-4-n-fff9fc0546" Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.377 [INFO][4970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:30:52.438061 containerd[1544]: 2026-04-16 23:30:52.377 [INFO][4970] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.72/26] IPv6=[] ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" HandleID="k8s-pod-network.e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Workload="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" Apr 16 23:30:52.440167 containerd[1544]: 2026-04-16 23:30:52.384 [INFO][4955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1e919261-a6de-4170-8882-981734963b11", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 16, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"", Pod:"goldmane-9f7667bb8-ktxr4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18d7b5ba868", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:52.440167 containerd[1544]: 2026-04-16 23:30:52.385 [INFO][4955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.72/32] ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" Apr 16 23:30:52.440167 containerd[1544]: 2026-04-16 23:30:52.386 [INFO][4955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18d7b5ba868 ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" Apr 16 23:30:52.440167 containerd[1544]: 2026-04-16 23:30:52.402 [INFO][4955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" Apr 16 23:30:52.440167 containerd[1544]: 2026-04-16 23:30:52.405 [INFO][4955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1e919261-a6de-4170-8882-981734963b11", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fff9fc0546", ContainerID:"e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9", Pod:"goldmane-9f7667bb8-ktxr4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18d7b5ba868", MAC:"72:b6:8a:c4:b6:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:30:52.440167 containerd[1544]: 2026-04-16 23:30:52.430 [INFO][4955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" Namespace="calico-system" Pod="goldmane-9f7667bb8-ktxr4" WorkloadEndpoint="ci--4459--2--4--n--fff9fc0546-k8s-goldmane--9f7667bb8--ktxr4-eth0" Apr 16 23:30:52.453609 containerd[1544]: time="2026-04-16T23:30:52.453541751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:52.455636 containerd[1544]: time="2026-04-16T23:30:52.455593767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 16 23:30:52.459940 containerd[1544]: time="2026-04-16T23:30:52.459890923Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:52.465054 containerd[1544]: time="2026-04-16T23:30:52.465012462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:52.466687 containerd[1544]: time="2026-04-16T23:30:52.466651667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.013049248s" Apr 16 23:30:52.466782 containerd[1544]: time="2026-04-16T23:30:52.466691988Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 16 23:30:52.469619 containerd[1544]: time="2026-04-16T23:30:52.469496264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:30:52.505440 containerd[1544]: time="2026-04-16T23:30:52.505383959Z" level=info msg="connecting to shim e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9" address="unix:///run/containerd/s/f0330f7d8f85fcf04fb94bc1cef1bf90d163a622ce474bf145b68c96b217076f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:30:52.517048 containerd[1544]: time="2026-04-16T23:30:52.516767668Z" level=info msg="CreateContainer within sandbox \"dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 23:30:52.551805 containerd[1544]: time="2026-04-16T23:30:52.551263525Z" level=info msg="Container d24619b64a426d4acc106015498ef74eb9db2bdeb09555c82197da5015b8ae08: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:52.561960 systemd[1]: Started cri-containerd-e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9.scope - libcontainer container e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9. 
Apr 16 23:30:52.566380 containerd[1544]: time="2026-04-16T23:30:52.566329175Z" level=info msg="CreateContainer within sandbox \"dc472ea3e322811f0faf71064c6d22bc51820473d4bcfee0d08f4504331b6fdf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d24619b64a426d4acc106015498ef74eb9db2bdeb09555c82197da5015b8ae08\"" Apr 16 23:30:52.570207 containerd[1544]: time="2026-04-16T23:30:52.568170705Z" level=info msg="StartContainer for \"d24619b64a426d4acc106015498ef74eb9db2bdeb09555c82197da5015b8ae08\"" Apr 16 23:30:52.570207 containerd[1544]: time="2026-04-16T23:30:52.569340816Z" level=info msg="connecting to shim d24619b64a426d4acc106015498ef74eb9db2bdeb09555c82197da5015b8ae08" address="unix:///run/containerd/s/658b462e72953150bbbdb4c4448bd861ac873870e28fdd45a576b5b7fa64b7d2" protocol=ttrpc version=3 Apr 16 23:30:52.605932 systemd[1]: Started cri-containerd-d24619b64a426d4acc106015498ef74eb9db2bdeb09555c82197da5015b8ae08.scope - libcontainer container d24619b64a426d4acc106015498ef74eb9db2bdeb09555c82197da5015b8ae08. 
Apr 16 23:30:52.648323 containerd[1544]: time="2026-04-16T23:30:52.648201159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ktxr4,Uid:1e919261-a6de-4170-8882-981734963b11,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9\"" Apr 16 23:30:52.674821 containerd[1544]: time="2026-04-16T23:30:52.674667357Z" level=info msg="StartContainer for \"d24619b64a426d4acc106015498ef74eb9db2bdeb09555c82197da5015b8ae08\" returns successfully" Apr 16 23:30:53.524589 kubelet[2759]: I0416 23:30:53.524376 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-648d8fc559-q6448" podStartSLOduration=32.508267543 podStartE2EDuration="35.524358274s" podCreationTimestamp="2026-04-16 23:30:18 +0000 UTC" firstStartedPulling="2026-04-16 23:30:49.452656353 +0000 UTC m=+51.426487461" lastFinishedPulling="2026-04-16 23:30:52.468747004 +0000 UTC m=+54.442578192" observedRunningTime="2026-04-16 23:30:53.520290845 +0000 UTC m=+55.494121953" watchObservedRunningTime="2026-04-16 23:30:53.524358274 +0000 UTC m=+55.498189382" Apr 16 23:30:54.268138 systemd-networkd[1418]: cali18d7b5ba868: Gained IPv6LL Apr 16 23:30:56.107725 containerd[1544]: time="2026-04-16T23:30:56.107273728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:56.109790 containerd[1544]: time="2026-04-16T23:30:56.109677949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 16 23:30:56.111815 containerd[1544]: time="2026-04-16T23:30:56.110954462Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:56.115303 containerd[1544]: time="2026-04-16T23:30:56.115252933Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:56.116129 containerd[1544]: time="2026-04-16T23:30:56.116092515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.646552849s" Apr 16 23:30:56.116256 containerd[1544]: time="2026-04-16T23:30:56.116237758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:30:56.118068 containerd[1544]: time="2026-04-16T23:30:56.118017684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:30:56.123604 containerd[1544]: time="2026-04-16T23:30:56.123167337Z" level=info msg="CreateContainer within sandbox \"caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:30:56.131935 containerd[1544]: time="2026-04-16T23:30:56.131884601Z" level=info msg="Container 6caf7a1f421223a688b816f80e2437d18d46d7a51621a37e78947d0cc49c2de4: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:56.152089 containerd[1544]: time="2026-04-16T23:30:56.152017920Z" level=info msg="CreateContainer within sandbox \"caeccf6369ebb5a802db1c03a252d35d7eb4663ab58844689f43ee2a280a01f0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6caf7a1f421223a688b816f80e2437d18d46d7a51621a37e78947d0cc49c2de4\"" Apr 16 23:30:56.153277 containerd[1544]: time="2026-04-16T23:30:56.153234631Z" level=info msg="StartContainer 
for \"6caf7a1f421223a688b816f80e2437d18d46d7a51621a37e78947d0cc49c2de4\"" Apr 16 23:30:56.154794 containerd[1544]: time="2026-04-16T23:30:56.154673068Z" level=info msg="connecting to shim 6caf7a1f421223a688b816f80e2437d18d46d7a51621a37e78947d0cc49c2de4" address="unix:///run/containerd/s/94424988d55a5d527cb15f2062513a6efe42ad85c1d6bc777430b4818fa937d5" protocol=ttrpc version=3 Apr 16 23:30:56.187932 systemd[1]: Started cri-containerd-6caf7a1f421223a688b816f80e2437d18d46d7a51621a37e78947d0cc49c2de4.scope - libcontainer container 6caf7a1f421223a688b816f80e2437d18d46d7a51621a37e78947d0cc49c2de4. Apr 16 23:30:56.235430 containerd[1544]: time="2026-04-16T23:30:56.235359905Z" level=info msg="StartContainer for \"6caf7a1f421223a688b816f80e2437d18d46d7a51621a37e78947d0cc49c2de4\" returns successfully" Apr 16 23:30:56.508458 containerd[1544]: time="2026-04-16T23:30:56.507673196Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:30:56.509819 containerd[1544]: time="2026-04-16T23:30:56.509770410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 23:30:56.520778 containerd[1544]: time="2026-04-16T23:30:56.520721652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 400.623035ms" Apr 16 23:30:56.520939 containerd[1544]: time="2026-04-16T23:30:56.520923337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:30:56.526139 containerd[1544]: 
time="2026-04-16T23:30:56.526106871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:30:56.535969 containerd[1544]: time="2026-04-16T23:30:56.534816015Z" level=info msg="CreateContainer within sandbox \"933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:30:56.548816 containerd[1544]: time="2026-04-16T23:30:56.548774334Z" level=info msg="Container 5eea6e90568fcef93eddc89016e829c91be0d8a34a425b1852da2b83d0736072: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:30:56.570814 containerd[1544]: time="2026-04-16T23:30:56.570444372Z" level=info msg="CreateContainer within sandbox \"933ee71b1e0c8265d129513b6f9a5534c19b71c54153bb41ff5f8ec1631b32bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5eea6e90568fcef93eddc89016e829c91be0d8a34a425b1852da2b83d0736072\"" Apr 16 23:30:56.573051 containerd[1544]: time="2026-04-16T23:30:56.573020719Z" level=info msg="StartContainer for \"5eea6e90568fcef93eddc89016e829c91be0d8a34a425b1852da2b83d0736072\"" Apr 16 23:30:56.576052 containerd[1544]: time="2026-04-16T23:30:56.575998995Z" level=info msg="connecting to shim 5eea6e90568fcef93eddc89016e829c91be0d8a34a425b1852da2b83d0736072" address="unix:///run/containerd/s/4f18c9fd1bdb317acd9877d931f901461fd3eb8cf2b8853d701ea11e28e4b5e1" protocol=ttrpc version=3 Apr 16 23:30:56.608003 systemd[1]: Started cri-containerd-5eea6e90568fcef93eddc89016e829c91be0d8a34a425b1852da2b83d0736072.scope - libcontainer container 5eea6e90568fcef93eddc89016e829c91be0d8a34a425b1852da2b83d0736072. 
Apr 16 23:30:56.692211 containerd[1544]: time="2026-04-16T23:30:56.691988822Z" level=info msg="StartContainer for \"5eea6e90568fcef93eddc89016e829c91be0d8a34a425b1852da2b83d0736072\" returns successfully" Apr 16 23:30:57.540069 kubelet[2759]: I0416 23:30:57.540034 2759 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:30:57.556273 kubelet[2759]: I0416 23:30:57.556092 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-784c96cd86-kh6sr" podStartSLOduration=36.096927589 podStartE2EDuration="41.556078542s" podCreationTimestamp="2026-04-16 23:30:16 +0000 UTC" firstStartedPulling="2026-04-16 23:30:50.65841792 +0000 UTC m=+52.632249028" lastFinishedPulling="2026-04-16 23:30:56.117568833 +0000 UTC m=+58.091399981" observedRunningTime="2026-04-16 23:30:56.568222835 +0000 UTC m=+58.542053943" watchObservedRunningTime="2026-04-16 23:30:57.556078542 +0000 UTC m=+59.529909610" Apr 16 23:30:58.542546 kubelet[2759]: I0416 23:30:58.542507 2759 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:30:59.913479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409819667.mount: Deactivated successfully. 
Apr 16 23:31:00.323347 containerd[1544]: time="2026-04-16T23:31:00.323294990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:00.325964 containerd[1544]: time="2026-04-16T23:31:00.325907574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 16 23:31:00.327772 containerd[1544]: time="2026-04-16T23:31:00.326833197Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:00.329677 containerd[1544]: time="2026-04-16T23:31:00.329598305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:31:00.330836 containerd[1544]: time="2026-04-16T23:31:00.330788894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.804454618s" Apr 16 23:31:00.330907 containerd[1544]: time="2026-04-16T23:31:00.330836096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 16 23:31:00.341337 containerd[1544]: time="2026-04-16T23:31:00.340977346Z" level=info msg="CreateContainer within sandbox \"e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 23:31:00.368876 containerd[1544]: time="2026-04-16T23:31:00.367867449Z" 
level=info msg="Container 6e5af3635a581e79cbe1e449d0bb74320284f5475062b8bfbca927e3f4b5d922: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:31:00.385618 containerd[1544]: time="2026-04-16T23:31:00.385548324Z" level=info msg="CreateContainer within sandbox \"e0a463ff6fa910114379bf72dfd84c61023220f52fd4cd4f0c8491d953c399f9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6e5af3635a581e79cbe1e449d0bb74320284f5475062b8bfbca927e3f4b5d922\"" Apr 16 23:31:00.387863 containerd[1544]: time="2026-04-16T23:31:00.386119938Z" level=info msg="StartContainer for \"6e5af3635a581e79cbe1e449d0bb74320284f5475062b8bfbca927e3f4b5d922\"" Apr 16 23:31:00.387916 containerd[1544]: time="2026-04-16T23:31:00.387871622Z" level=info msg="connecting to shim 6e5af3635a581e79cbe1e449d0bb74320284f5475062b8bfbca927e3f4b5d922" address="unix:///run/containerd/s/f0330f7d8f85fcf04fb94bc1cef1bf90d163a622ce474bf145b68c96b217076f" protocol=ttrpc version=3 Apr 16 23:31:00.412915 systemd[1]: Started cri-containerd-6e5af3635a581e79cbe1e449d0bb74320284f5475062b8bfbca927e3f4b5d922.scope - libcontainer container 6e5af3635a581e79cbe1e449d0bb74320284f5475062b8bfbca927e3f4b5d922. 
Apr 16 23:31:00.461615 containerd[1544]: time="2026-04-16T23:31:00.461500717Z" level=info msg="StartContainer for \"6e5af3635a581e79cbe1e449d0bb74320284f5475062b8bfbca927e3f4b5d922\" returns successfully" Apr 16 23:31:00.576786 kubelet[2759]: I0416 23:31:00.574323 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-784c96cd86-5m7xc" podStartSLOduration=38.743419412 podStartE2EDuration="44.574301937s" podCreationTimestamp="2026-04-16 23:30:16 +0000 UTC" firstStartedPulling="2026-04-16 23:30:50.693130332 +0000 UTC m=+52.666961440" lastFinishedPulling="2026-04-16 23:30:56.524012857 +0000 UTC m=+58.497843965" observedRunningTime="2026-04-16 23:30:57.558574525 +0000 UTC m=+59.532405633" watchObservedRunningTime="2026-04-16 23:31:00.574301937 +0000 UTC m=+62.548133085" Apr 16 23:31:00.665671 kubelet[2759]: I0416 23:31:00.664250 2759 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-ktxr4" podStartSLOduration=36.983162142 podStartE2EDuration="44.664236274s" podCreationTimestamp="2026-04-16 23:30:16 +0000 UTC" firstStartedPulling="2026-04-16 23:30:52.650646705 +0000 UTC m=+54.624477813" lastFinishedPulling="2026-04-16 23:31:00.331720837 +0000 UTC m=+62.305551945" observedRunningTime="2026-04-16 23:31:00.579093135 +0000 UTC m=+62.552924243" watchObservedRunningTime="2026-04-16 23:31:00.664236274 +0000 UTC m=+62.638067382" Apr 16 23:31:01.301645 kubelet[2759]: I0416 23:31:01.301206 2759 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:31:29.240470 kubelet[2759]: I0416 23:31:29.240197 2759 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:31:42.467125 systemd[1]: Started sshd@7-46.224.1.2:22-50.85.169.122:56662.service - OpenSSH per-connection server daemon (50.85.169.122:56662). 
Apr 16 23:31:42.614098 sshd[5464]: Accepted publickey for core from 50.85.169.122 port 56662 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:31:42.616928 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:42.622973 systemd-logind[1520]: New session 8 of user core. Apr 16 23:31:42.632157 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 23:31:42.774191 sshd[5467]: Connection closed by 50.85.169.122 port 56662 Apr 16 23:31:42.774678 sshd-session[5464]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:42.782651 systemd[1]: sshd@7-46.224.1.2:22-50.85.169.122:56662.service: Deactivated successfully. Apr 16 23:31:42.786414 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 23:31:42.788009 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Apr 16 23:31:42.790007 systemd-logind[1520]: Removed session 8. Apr 16 23:31:47.804262 systemd[1]: Started sshd@8-46.224.1.2:22-50.85.169.122:56676.service - OpenSSH per-connection server daemon (50.85.169.122:56676). Apr 16 23:31:47.936764 sshd[5480]: Accepted publickey for core from 50.85.169.122 port 56676 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:31:47.938485 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:47.944815 systemd-logind[1520]: New session 9 of user core. Apr 16 23:31:47.948915 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 23:31:48.075015 sshd[5483]: Connection closed by 50.85.169.122 port 56676 Apr 16 23:31:48.077224 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:48.085510 systemd[1]: sshd@8-46.224.1.2:22-50.85.169.122:56676.service: Deactivated successfully. Apr 16 23:31:48.086456 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. 
Apr 16 23:31:48.089210 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 23:31:48.092613 systemd-logind[1520]: Removed session 9. Apr 16 23:31:53.109980 systemd[1]: Started sshd@9-46.224.1.2:22-50.85.169.122:47470.service - OpenSSH per-connection server daemon (50.85.169.122:47470). Apr 16 23:31:53.244094 sshd[5518]: Accepted publickey for core from 50.85.169.122 port 47470 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:31:53.245977 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:53.251590 systemd-logind[1520]: New session 10 of user core. Apr 16 23:31:53.258966 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 23:31:53.373752 sshd[5521]: Connection closed by 50.85.169.122 port 47470 Apr 16 23:31:53.374567 sshd-session[5518]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:53.382157 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit. Apr 16 23:31:53.383289 systemd[1]: sshd@9-46.224.1.2:22-50.85.169.122:47470.service: Deactivated successfully. Apr 16 23:31:53.387533 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 23:31:53.390834 systemd-logind[1520]: Removed session 10. Apr 16 23:31:58.399847 systemd[1]: Started sshd@10-46.224.1.2:22-50.85.169.122:47474.service - OpenSSH per-connection server daemon (50.85.169.122:47474). Apr 16 23:31:58.533428 sshd[5578]: Accepted publickey for core from 50.85.169.122 port 47474 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:31:58.535447 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:58.540896 systemd-logind[1520]: New session 11 of user core. Apr 16 23:31:58.544976 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 16 23:31:58.659674 sshd[5581]: Connection closed by 50.85.169.122 port 47474 Apr 16 23:31:58.660230 sshd-session[5578]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:58.668354 systemd[1]: sshd@10-46.224.1.2:22-50.85.169.122:47474.service: Deactivated successfully. Apr 16 23:31:58.672748 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 23:31:58.675385 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. Apr 16 23:31:58.686956 systemd[1]: Started sshd@11-46.224.1.2:22-50.85.169.122:47482.service - OpenSSH per-connection server daemon (50.85.169.122:47482). Apr 16 23:31:58.688863 systemd-logind[1520]: Removed session 11. Apr 16 23:31:58.824678 sshd[5593]: Accepted publickey for core from 50.85.169.122 port 47482 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:31:58.826656 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:58.837761 systemd-logind[1520]: New session 12 of user core. Apr 16 23:31:58.841004 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 23:31:59.024713 sshd[5596]: Connection closed by 50.85.169.122 port 47482 Apr 16 23:31:59.027587 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:59.034064 systemd[1]: sshd@11-46.224.1.2:22-50.85.169.122:47482.service: Deactivated successfully. Apr 16 23:31:59.034066 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. Apr 16 23:31:59.038799 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 23:31:59.053004 systemd-logind[1520]: Removed session 12. Apr 16 23:31:59.053613 systemd[1]: Started sshd@12-46.224.1.2:22-50.85.169.122:47498.service - OpenSSH per-connection server daemon (50.85.169.122:47498). 
Apr 16 23:31:59.187100 sshd[5607]: Accepted publickey for core from 50.85.169.122 port 47498 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:31:59.190896 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:59.196754 systemd-logind[1520]: New session 13 of user core. Apr 16 23:31:59.202997 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 23:31:59.328733 sshd[5610]: Connection closed by 50.85.169.122 port 47498 Apr 16 23:31:59.329683 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:59.337207 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. Apr 16 23:31:59.337790 systemd[1]: sshd@12-46.224.1.2:22-50.85.169.122:47498.service: Deactivated successfully. Apr 16 23:31:59.341423 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 23:31:59.344887 systemd-logind[1520]: Removed session 13. Apr 16 23:32:04.356302 systemd[1]: Started sshd@13-46.224.1.2:22-50.85.169.122:35086.service - OpenSSH per-connection server daemon (50.85.169.122:35086). Apr 16 23:32:04.492270 sshd[5658]: Accepted publickey for core from 50.85.169.122 port 35086 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:32:04.495047 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:04.501962 systemd-logind[1520]: New session 14 of user core. Apr 16 23:32:04.505061 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 23:32:04.629751 sshd[5663]: Connection closed by 50.85.169.122 port 35086 Apr 16 23:32:04.631213 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:04.637985 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. Apr 16 23:32:04.638617 systemd[1]: sshd@13-46.224.1.2:22-50.85.169.122:35086.service: Deactivated successfully. 
Apr 16 23:32:04.643833 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 23:32:04.659380 systemd-logind[1520]: Removed session 14. Apr 16 23:32:04.661143 systemd[1]: Started sshd@14-46.224.1.2:22-50.85.169.122:35092.service - OpenSSH per-connection server daemon (50.85.169.122:35092). Apr 16 23:32:04.792640 sshd[5675]: Accepted publickey for core from 50.85.169.122 port 35092 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:32:04.796308 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:04.801608 systemd-logind[1520]: New session 15 of user core. Apr 16 23:32:04.809909 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 23:32:05.097945 sshd[5678]: Connection closed by 50.85.169.122 port 35092 Apr 16 23:32:05.099582 sshd-session[5675]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:05.106481 systemd[1]: sshd@14-46.224.1.2:22-50.85.169.122:35092.service: Deactivated successfully. Apr 16 23:32:05.112446 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 23:32:05.116711 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. Apr 16 23:32:05.139280 systemd[1]: Started sshd@15-46.224.1.2:22-50.85.169.122:35098.service - OpenSSH per-connection server daemon (50.85.169.122:35098). Apr 16 23:32:05.143387 systemd-logind[1520]: Removed session 15. Apr 16 23:32:05.272739 sshd[5688]: Accepted publickey for core from 50.85.169.122 port 35098 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:32:05.274897 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:05.281448 systemd-logind[1520]: New session 16 of user core. Apr 16 23:32:05.292047 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 16 23:32:06.120584 sshd[5691]: Connection closed by 50.85.169.122 port 35098 Apr 16 23:32:06.120358 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:06.128873 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. Apr 16 23:32:06.129005 systemd[1]: sshd@15-46.224.1.2:22-50.85.169.122:35098.service: Deactivated successfully. Apr 16 23:32:06.135318 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 23:32:06.149501 systemd[1]: Started sshd@16-46.224.1.2:22-50.85.169.122:35100.service - OpenSSH per-connection server daemon (50.85.169.122:35100). Apr 16 23:32:06.151790 systemd-logind[1520]: Removed session 16. Apr 16 23:32:06.278362 sshd[5714]: Accepted publickey for core from 50.85.169.122 port 35100 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:32:06.281255 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:06.287968 systemd-logind[1520]: New session 17 of user core. Apr 16 23:32:06.294024 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 23:32:06.544164 sshd[5717]: Connection closed by 50.85.169.122 port 35100 Apr 16 23:32:06.545295 sshd-session[5714]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:06.553530 systemd[1]: sshd@16-46.224.1.2:22-50.85.169.122:35100.service: Deactivated successfully. Apr 16 23:32:06.558083 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 23:32:06.561068 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. Apr 16 23:32:06.574988 systemd[1]: Started sshd@17-46.224.1.2:22-50.85.169.122:35104.service - OpenSSH per-connection server daemon (50.85.169.122:35104). Apr 16 23:32:06.577373 systemd-logind[1520]: Removed session 17. 
Apr 16 23:32:06.713803 sshd[5727]: Accepted publickey for core from 50.85.169.122 port 35104 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:32:06.716542 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:06.722407 systemd-logind[1520]: New session 18 of user core. Apr 16 23:32:06.727889 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 23:32:06.848913 sshd[5730]: Connection closed by 50.85.169.122 port 35104 Apr 16 23:32:06.850598 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:06.855771 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit. Apr 16 23:32:06.856076 systemd[1]: sshd@17-46.224.1.2:22-50.85.169.122:35104.service: Deactivated successfully. Apr 16 23:32:06.859474 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 23:32:06.861389 systemd-logind[1520]: Removed session 18. Apr 16 23:32:11.880108 systemd[1]: Started sshd@18-46.224.1.2:22-50.85.169.122:49036.service - OpenSSH per-connection server daemon (50.85.169.122:49036). Apr 16 23:32:12.014558 sshd[5770]: Accepted publickey for core from 50.85.169.122 port 49036 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:32:12.016799 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:12.023443 systemd-logind[1520]: New session 19 of user core. Apr 16 23:32:12.029919 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 23:32:12.147559 sshd[5773]: Connection closed by 50.85.169.122 port 49036 Apr 16 23:32:12.148303 sshd-session[5770]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:12.155526 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit. Apr 16 23:32:12.156045 systemd[1]: sshd@18-46.224.1.2:22-50.85.169.122:49036.service: Deactivated successfully. 
Apr 16 23:32:12.161667 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 23:32:12.167830 systemd-logind[1520]: Removed session 19. Apr 16 23:32:17.178199 systemd[1]: Started sshd@19-46.224.1.2:22-50.85.169.122:49042.service - OpenSSH per-connection server daemon (50.85.169.122:49042). Apr 16 23:32:17.308776 sshd[5807]: Accepted publickey for core from 50.85.169.122 port 49042 ssh2: RSA SHA256:YEfK51SBTL1bMgxZgn6/4J+7cuIr/XFmOhH6oMsPne8 Apr 16 23:32:17.311100 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:17.317204 systemd-logind[1520]: New session 20 of user core. Apr 16 23:32:17.325020 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 23:32:17.448398 sshd[5810]: Connection closed by 50.85.169.122 port 49042 Apr 16 23:32:17.449422 sshd-session[5807]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:17.454943 systemd[1]: sshd@19-46.224.1.2:22-50.85.169.122:49042.service: Deactivated successfully. Apr 16 23:32:17.458175 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 23:32:17.459972 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit. Apr 16 23:32:17.462003 systemd-logind[1520]: Removed session 20. Apr 16 23:32:31.875301 systemd[1]: cri-containerd-f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985.scope: Deactivated successfully. Apr 16 23:32:31.875622 systemd[1]: cri-containerd-f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985.scope: Consumed 3.015s CPU time, 66.4M memory peak, 2.2M read from disk. 
Apr 16 23:32:31.881730 containerd[1544]: time="2026-04-16T23:32:31.881664398Z" level=info msg="received container exit event container_id:\"f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985\" id:\"f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985\" pid:2600 exit_status:1 exited_at:{seconds:1776382351 nanos:880882547}"
Apr 16 23:32:31.908638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985-rootfs.mount: Deactivated successfully.
Apr 16 23:32:32.318163 kubelet[2759]: E0416 23:32:32.317998 2759 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46340->10.0.0.2:2379: read: connection timed out"
Apr 16 23:32:32.900687 kubelet[2759]: I0416 23:32:32.900641 2759 scope.go:122] "RemoveContainer" containerID="f172186664fee74882b3d372a3bbe4be272795324dd64dd08b459d3847d1b985"
Apr 16 23:32:32.905724 containerd[1544]: time="2026-04-16T23:32:32.904868563Z" level=info msg="CreateContainer within sandbox \"abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 23:32:32.930021 containerd[1544]: time="2026-04-16T23:32:32.929979094Z" level=info msg="Container cb7eb72c66bfbb78a1783ac7bb003969d8d8bf51325e06ec0c63509f80dee304: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:32:32.930948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250866019.mount: Deactivated successfully.
Apr 16 23:32:32.942260 systemd[1]: cri-containerd-cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3.scope: Deactivated successfully.
Apr 16 23:32:32.942544 systemd[1]: cri-containerd-cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3.scope: Consumed 14.439s CPU time, 122.1M memory peak, 3.4M read from disk.
Apr 16 23:32:32.945201 containerd[1544]: time="2026-04-16T23:32:32.944610671Z" level=info msg="CreateContainer within sandbox \"abe753f053d575166adc9f99e1eb2e2abd746ae86dc55cf6fb457de1efa5e12a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cb7eb72c66bfbb78a1783ac7bb003969d8d8bf51325e06ec0c63509f80dee304\""
Apr 16 23:32:32.946240 containerd[1544]: time="2026-04-16T23:32:32.945015997Z" level=info msg="received container exit event container_id:\"cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3\" id:\"cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3\" pid:3085 exit_status:1 exited_at:{seconds:1776382352 nanos:944629391}"
Apr 16 23:32:32.947576 containerd[1544]: time="2026-04-16T23:32:32.947529394Z" level=info msg="StartContainer for \"cb7eb72c66bfbb78a1783ac7bb003969d8d8bf51325e06ec0c63509f80dee304\""
Apr 16 23:32:32.951281 containerd[1544]: time="2026-04-16T23:32:32.951243529Z" level=info msg="connecting to shim cb7eb72c66bfbb78a1783ac7bb003969d8d8bf51325e06ec0c63509f80dee304" address="unix:///run/containerd/s/b0e51fd40e37ba24564f11f5a3d247441c0dfecd0fb4ead8cb682af73ec76827" protocol=ttrpc version=3
Apr 16 23:32:32.983860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3-rootfs.mount: Deactivated successfully.
Apr 16 23:32:32.994288 systemd[1]: Started cri-containerd-cb7eb72c66bfbb78a1783ac7bb003969d8d8bf51325e06ec0c63509f80dee304.scope - libcontainer container cb7eb72c66bfbb78a1783ac7bb003969d8d8bf51325e06ec0c63509f80dee304.
Apr 16 23:32:33.044067 containerd[1544]: time="2026-04-16T23:32:33.043944143Z" level=info msg="StartContainer for \"cb7eb72c66bfbb78a1783ac7bb003969d8d8bf51325e06ec0c63509f80dee304\" returns successfully"
Apr 16 23:32:33.909151 kubelet[2759]: I0416 23:32:33.908868 2759 scope.go:122] "RemoveContainer" containerID="cfe28905f7a54da9d80a858e312aa150dc465d6a3569920af54900b8d0d841f3"
Apr 16 23:32:33.913527 containerd[1544]: time="2026-04-16T23:32:33.913464298Z" level=info msg="CreateContainer within sandbox \"02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 16 23:32:33.926726 containerd[1544]: time="2026-04-16T23:32:33.926191168Z" level=info msg="Container eddf68bde8e19644b5ccfdab8262ee9d8efa64d2e77659bd00465c61b20600d5: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:32:33.939055 containerd[1544]: time="2026-04-16T23:32:33.939001198Z" level=info msg="CreateContainer within sandbox \"02fce5e964a6e660c9f9100af7b81ab7dd993614c686c698bf4b307ca872a5f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"eddf68bde8e19644b5ccfdab8262ee9d8efa64d2e77659bd00465c61b20600d5\""
Apr 16 23:32:33.939953 containerd[1544]: time="2026-04-16T23:32:33.939923092Z" level=info msg="StartContainer for \"eddf68bde8e19644b5ccfdab8262ee9d8efa64d2e77659bd00465c61b20600d5\""
Apr 16 23:32:33.941387 containerd[1544]: time="2026-04-16T23:32:33.941358233Z" level=info msg="connecting to shim eddf68bde8e19644b5ccfdab8262ee9d8efa64d2e77659bd00465c61b20600d5" address="unix:///run/containerd/s/b96aa5b3ac4dad1e24e9c53e6fb25c22e0378420b82e643a5ae9128615ebd37b" protocol=ttrpc version=3
Apr 16 23:32:33.963899 systemd[1]: Started cri-containerd-eddf68bde8e19644b5ccfdab8262ee9d8efa64d2e77659bd00465c61b20600d5.scope - libcontainer container eddf68bde8e19644b5ccfdab8262ee9d8efa64d2e77659bd00465c61b20600d5.
Apr 16 23:32:33.996506 containerd[1544]: time="2026-04-16T23:32:33.996459571Z" level=info msg="StartContainer for \"eddf68bde8e19644b5ccfdab8262ee9d8efa64d2e77659bd00465c61b20600d5\" returns successfully"
Apr 16 23:32:37.246860 kubelet[2759]: E0416 23:32:37.245646 2759 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45978->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-fff9fc0546.18a6fa552d222405 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-fff9fc0546,UID:74e6ab4b3a047bb356acf8571be9be24,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-fff9fc0546,},FirstTimestamp:2026-04-16 23:32:26.780615685 +0000 UTC m=+148.754446793,LastTimestamp:2026-04-16 23:32:26.780615685 +0000 UTC m=+148.754446793,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-fff9fc0546,}"