Apr 16 23:55:16.758713 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 16 23:55:16.758734 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 16 22:10:49 -00 2026
Apr 16 23:55:16.758743 kernel: KASLR enabled
Apr 16 23:55:16.758749 kernel: efi: EFI v2.7 by EDK II
Apr 16 23:55:16.758755 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438357218
Apr 16 23:55:16.758760 kernel: random: crng init done
Apr 16 23:55:16.758767 kernel: secureboot: Secure boot disabled
Apr 16 23:55:16.758772 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:55:16.758778 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Apr 16 23:55:16.758784 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 23:55:16.758791 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758797 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758802 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758808 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758815 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758821 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758829 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758835 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758841 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758847 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:16.758853 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 23:55:16.758859 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 16 23:55:16.758865 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 16 23:55:16.758870 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Apr 16 23:55:16.758876 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Apr 16 23:55:16.758882 kernel: Zone ranges:
Apr 16 23:55:16.758889 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 16 23:55:16.758895 kernel: DMA32 empty
Apr 16 23:55:16.758901 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Apr 16 23:55:16.758907 kernel: Device empty
Apr 16 23:55:16.758913 kernel: Movable zone start for each node
Apr 16 23:55:16.758918 kernel: Early memory node ranges
Apr 16 23:55:16.758924 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff]
Apr 16 23:55:16.758930 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff]
Apr 16 23:55:16.758936 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Apr 16 23:55:16.758942 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Apr 16 23:55:16.758948 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Apr 16 23:55:16.758954 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Apr 16 23:55:16.758961 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Apr 16 23:55:16.758967 kernel: psci: probing for conduit method from ACPI.
Apr 16 23:55:16.758975 kernel: psci: PSCIv1.3 detected in firmware.
Apr 16 23:55:16.758982 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 16 23:55:16.758988 kernel: psci: Trusted OS migration not required
Apr 16 23:55:16.758996 kernel: psci: SMC Calling Convention v1.1
Apr 16 23:55:16.759002 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 16 23:55:16.759009 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 16 23:55:16.759015 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 16 23:55:16.759021 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Apr 16 23:55:16.759027 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Apr 16 23:55:16.759034 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 16 23:55:16.759079 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 16 23:55:16.759086 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Apr 16 23:55:16.759092 kernel: Detected PIPT I-cache on CPU0
Apr 16 23:55:16.759099 kernel: CPU features: detected: GIC system register CPU interface
Apr 16 23:55:16.759105 kernel: CPU features: detected: Spectre-v4
Apr 16 23:55:16.759113 kernel: CPU features: detected: Spectre-BHB
Apr 16 23:55:16.759120 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 16 23:55:16.759126 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 16 23:55:16.759132 kernel: CPU features: detected: ARM erratum 1418040
Apr 16 23:55:16.759138 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 16 23:55:16.759145 kernel: alternatives: applying boot alternatives
Apr 16 23:55:16.759153 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:55:16.759159 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Apr 16 23:55:16.759166 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 16 23:55:16.759172 kernel: Fallback order for Node 0: 0
Apr 16 23:55:16.759180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Apr 16 23:55:16.759186 kernel: Policy zone: Normal
Apr 16 23:55:16.759192 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:55:16.759198 kernel: software IO TLB: area num 4.
Apr 16 23:55:16.759205 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Apr 16 23:55:16.759211 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 23:55:16.759217 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:55:16.759224 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:55:16.759231 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 23:55:16.759237 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:55:16.759244 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:55:16.759250 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:55:16.759258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 23:55:16.759264 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 23:55:16.759271 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 23:55:16.759277 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 16 23:55:16.759283 kernel: GICv3: 256 SPIs implemented
Apr 16 23:55:16.759290 kernel: GICv3: 0 Extended SPIs implemented
Apr 16 23:55:16.759296 kernel: Root IRQ handler: gic_handle_irq
Apr 16 23:55:16.759302 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 16 23:55:16.759308 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Apr 16 23:55:16.759315 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 16 23:55:16.759321 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 16 23:55:16.759327 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Apr 16 23:55:16.759335 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Apr 16 23:55:16.759341 kernel: GICv3: using LPI property table @0x0000000100130000
Apr 16 23:55:16.759348 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Apr 16 23:55:16.759354 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:55:16.759360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 23:55:16.759367 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 16 23:55:16.759373 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 16 23:55:16.759380 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 16 23:55:16.759386 kernel: arm-pv: using stolen time PV
Apr 16 23:55:16.759393 kernel: Console: colour dummy device 80x25
Apr 16 23:55:16.759400 kernel: ACPI: Core revision 20240827
Apr 16 23:55:16.759407 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 16 23:55:16.759414 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:55:16.759420 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:55:16.759427 kernel: landlock: Up and running.
Apr 16 23:55:16.759433 kernel: SELinux: Initializing.
Apr 16 23:55:16.759440 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:55:16.759446 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:55:16.759453 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:55:16.759460 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:55:16.759467 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:55:16.759474 kernel: Remapping and enabling EFI services.
Apr 16 23:55:16.759480 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:55:16.759487 kernel: Detected PIPT I-cache on CPU1
Apr 16 23:55:16.759494 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 16 23:55:16.759500 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Apr 16 23:55:16.759507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 23:55:16.759513 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 16 23:55:16.759520 kernel: Detected PIPT I-cache on CPU2
Apr 16 23:55:16.759532 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Apr 16 23:55:16.759539 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Apr 16 23:55:16.759546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 23:55:16.759555 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Apr 16 23:55:16.759562 kernel: Detected PIPT I-cache on CPU3
Apr 16 23:55:16.759569 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Apr 16 23:55:16.759576 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Apr 16 23:55:16.759583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 16 23:55:16.759591 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Apr 16 23:55:16.759597 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 23:55:16.759605 kernel: SMP: Total of 4 processors activated.
Apr 16 23:55:16.759611 kernel: CPU: All CPU(s) started at EL1
Apr 16 23:55:16.759618 kernel: CPU features: detected: 32-bit EL0 Support
Apr 16 23:55:16.759625 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 16 23:55:16.759632 kernel: CPU features: detected: Common not Private translations
Apr 16 23:55:16.759639 kernel: CPU features: detected: CRC32 instructions
Apr 16 23:55:16.759646 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 16 23:55:16.759654 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 16 23:55:16.759661 kernel: CPU features: detected: LSE atomic instructions
Apr 16 23:55:16.759668 kernel: CPU features: detected: Privileged Access Never
Apr 16 23:55:16.759675 kernel: CPU features: detected: RAS Extension Support
Apr 16 23:55:16.759681 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 16 23:55:16.759688 kernel: alternatives: applying system-wide alternatives
Apr 16 23:55:16.759695 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Apr 16 23:55:16.759702 kernel: Memory: 16297296K/16777216K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 457136K reserved, 16384K cma-reserved)
Apr 16 23:55:16.759709 kernel: devtmpfs: initialized
Apr 16 23:55:16.759717 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:55:16.759724 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 23:55:16.759731 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 16 23:55:16.759738 kernel: 0 pages in range for non-PLT usage
Apr 16 23:55:16.759745 kernel: 508384 pages in range for PLT usage
Apr 16 23:55:16.759752 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:55:16.759759 kernel: SMBIOS 3.0.0 present.
Apr 16 23:55:16.759765 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 16 23:55:16.759772 kernel: DMI: Memory slots populated: 1/1
Apr 16 23:55:16.759780 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:55:16.759787 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Apr 16 23:55:16.759794 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 16 23:55:16.759801 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 16 23:55:16.759808 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:55:16.759815 kernel: audit: type=2000 audit(0.043:1): state=initialized audit_enabled=0 res=1
Apr 16 23:55:16.759822 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:55:16.759829 kernel: cpuidle: using governor menu
Apr 16 23:55:16.759836 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 16 23:55:16.759844 kernel: ASID allocator initialised with 32768 entries
Apr 16 23:55:16.759851 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:55:16.759857 kernel: Serial: AMBA PL011 UART driver
Apr 16 23:55:16.759864 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:55:16.759871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:55:16.759878 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 16 23:55:16.759885 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 16 23:55:16.759892 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:55:16.759899 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:55:16.759907 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 16 23:55:16.759914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 16 23:55:16.759920 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:55:16.759927 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:55:16.759934 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:55:16.759941 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:55:16.759948 kernel: ACPI: Interpreter enabled
Apr 16 23:55:16.759955 kernel: ACPI: Using GIC for interrupt routing
Apr 16 23:55:16.759961 kernel: ACPI: MCFG table detected, 1 entries
Apr 16 23:55:16.759969 kernel: ACPI: CPU0 has been hot-added
Apr 16 23:55:16.759976 kernel: ACPI: CPU1 has been hot-added
Apr 16 23:55:16.759983 kernel: ACPI: CPU2 has been hot-added
Apr 16 23:55:16.759990 kernel: ACPI: CPU3 has been hot-added
Apr 16 23:55:16.759996 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 16 23:55:16.760003 kernel: printk: legacy console [ttyAMA0] enabled
Apr 16 23:55:16.760010 kernel: ACPI: PCI: Interrupt link L000 configured for IRQ 35
Apr 16 23:55:16.760017 kernel: ACPI: PCI: Interrupt link L001 configured for IRQ 36
Apr 16 23:55:16.760024 kernel: ACPI: PCI: Interrupt link L002 configured for IRQ 37
Apr 16 23:55:16.760032 kernel: ACPI: PCI: Interrupt link L003 configured for IRQ 38
Apr 16 23:55:16.760045 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 23:55:16.760180 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:55:16.760245 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 16 23:55:16.760323 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 16 23:55:16.760382 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 16 23:55:16.760440 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 16 23:55:16.760453 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 16 23:55:16.760460 kernel: PCI host bridge to bus 0000:00
Apr 16 23:55:16.760532 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 16 23:55:16.760587 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 16 23:55:16.760640 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 16 23:55:16.760692 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 23:55:16.760771 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:55:16.760845 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.760907 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff]
Apr 16 23:55:16.760983 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Apr 16 23:55:16.761068 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff]
Apr 16 23:55:16.761133 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Apr 16 23:55:16.761195 kernel: pci 0000:00:01.0: enabling Extended Tags
Apr 16 23:55:16.761262 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.761324 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff]
Apr 16 23:55:16.761381 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Apr 16 23:55:16.761439 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff]
Apr 16 23:55:16.761497 kernel: pci 0000:00:01.1: enabling Extended Tags
Apr 16 23:55:16.761564 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.761624 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff]
Apr 16 23:55:16.761684 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Apr 16 23:55:16.761742 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff]
Apr 16 23:55:16.761799 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Apr 16 23:55:16.761859 kernel: pci 0000:00:01.2: enabling Extended Tags
Apr 16 23:55:16.761932 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.761993 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff]
Apr 16 23:55:16.762068 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Apr 16 23:55:16.762131 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Apr 16 23:55:16.762189 kernel: pci 0000:00:01.3: enabling Extended Tags
Apr 16 23:55:16.762254 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.762324 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff]
Apr 16 23:55:16.762386 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Apr 16 23:55:16.762452 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff]
Apr 16 23:55:16.762513 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Apr 16 23:55:16.762574 kernel: pci 0000:00:01.4: enabling Extended Tags
Apr 16 23:55:16.762659 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.762719 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff]
Apr 16 23:55:16.762778 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Apr 16 23:55:16.762835 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff]
Apr 16 23:55:16.762892 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Apr 16 23:55:16.762949 kernel: pci 0000:00:01.5: enabling Extended Tags
Apr 16 23:55:16.763013 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.763084 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff]
Apr 16 23:55:16.763142 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Apr 16 23:55:16.763199 kernel: pci 0000:00:01.6: enabling Extended Tags
Apr 16 23:55:16.763276 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.763341 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff]
Apr 16 23:55:16.763421 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Apr 16 23:55:16.763480 kernel: pci 0000:00:01.7: enabling Extended Tags
Apr 16 23:55:16.763583 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.763647 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff]
Apr 16 23:55:16.763706 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Apr 16 23:55:16.763764 kernel: pci 0000:00:02.0: enabling Extended Tags
Apr 16 23:55:16.763830 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.763889 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff]
Apr 16 23:55:16.763949 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Apr 16 23:55:16.764007 kernel: pci 0000:00:02.1: enabling Extended Tags
Apr 16 23:55:16.764087 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.764149 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff]
Apr 16 23:55:16.764209 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Apr 16 23:55:16.764282 kernel: pci 0000:00:02.2: enabling Extended Tags
Apr 16 23:55:16.764362 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.764444 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff]
Apr 16 23:55:16.764506 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Apr 16 23:55:16.764567 kernel: pci 0000:00:02.3: enabling Extended Tags
Apr 16 23:55:16.764636 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.764697 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff]
Apr 16 23:55:16.764755 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Apr 16 23:55:16.764812 kernel: pci 0000:00:02.4: enabling Extended Tags
Apr 16 23:55:16.764878 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.764936 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff]
Apr 16 23:55:16.764995 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Apr 16 23:55:16.765082 kernel: pci 0000:00:02.5: enabling Extended Tags
Apr 16 23:55:16.765151 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.765210 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff]
Apr 16 23:55:16.765274 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Apr 16 23:55:16.765337 kernel: pci 0000:00:02.6: enabling Extended Tags
Apr 16 23:55:16.765407 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.765469 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff]
Apr 16 23:55:16.765529 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Apr 16 23:55:16.765587 kernel: pci 0000:00:02.7: enabling Extended Tags
Apr 16 23:55:16.765652 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.765710 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff]
Apr 16 23:55:16.765771 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Apr 16 23:55:16.765829 kernel: pci 0000:00:03.0: enabling Extended Tags
Apr 16 23:55:16.765907 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.765966 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff]
Apr 16 23:55:16.766023 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Apr 16 23:55:16.766097 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff]
Apr 16 23:55:16.766156 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff]
Apr 16 23:55:16.766218 kernel: pci 0000:00:03.1: enabling Extended Tags
Apr 16 23:55:16.766283 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.766341 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff]
Apr 16 23:55:16.766404 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Apr 16 23:55:16.766462 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff]
Apr 16 23:55:16.766519 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff]
Apr 16 23:55:16.766576 kernel: pci 0000:00:03.2: enabling Extended Tags
Apr 16 23:55:16.766643 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.766701 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff]
Apr 16 23:55:16.766758 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Apr 16 23:55:16.766835 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]
Apr 16 23:55:16.766917 kernel: pci 0000:00:03.3: bridge window [mem 0x11a00000-0x11bfffff]
Apr 16 23:55:16.766977 kernel: pci 0000:00:03.3: enabling Extended Tags
Apr 16 23:55:16.767062 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.767128 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff]
Apr 16 23:55:16.767186 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Apr 16 23:55:16.767243 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff]
Apr 16 23:55:16.767300 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff]
Apr 16 23:55:16.767358 kernel: pci 0000:00:03.4: enabling Extended Tags
Apr 16 23:55:16.767425 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.767483 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff]
Apr 16 23:55:16.767543 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Apr 16 23:55:16.767600 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff]
Apr 16 23:55:16.767657 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff]
Apr 16 23:55:16.767715 kernel: pci 0000:00:03.5: enabling Extended Tags
Apr 16 23:55:16.767780 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.767838 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff]
Apr 16 23:55:16.767896 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Apr 16 23:55:16.767954 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff]
Apr 16 23:55:16.768011 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff]
Apr 16 23:55:16.768080 kernel: pci 0000:00:03.6: enabling Extended Tags
Apr 16 23:55:16.768146 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.768204 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff]
Apr 16 23:55:16.768277 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Apr 16 23:55:16.768349 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff]
Apr 16 23:55:16.768415 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff]
Apr 16 23:55:16.768472 kernel: pci 0000:00:03.7: enabling Extended Tags
Apr 16 23:55:16.768538 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.768596 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff]
Apr 16 23:55:16.768655 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Apr 16 23:55:16.768713 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff]
Apr 16 23:55:16.768770 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff]
Apr 16 23:55:16.768828 kernel: pci 0000:00:04.0: enabling Extended Tags
Apr 16 23:55:16.768892 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.768950 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff]
Apr 16 23:55:16.769007 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Apr 16 23:55:16.769083 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff]
Apr 16 23:55:16.769144 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff]
Apr 16 23:55:16.769204 kernel: pci 0000:00:04.1: enabling Extended Tags
Apr 16 23:55:16.769269 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.769327 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff]
Apr 16 23:55:16.769385 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Apr 16 23:55:16.769441 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff]
Apr 16 23:55:16.769501 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff]
Apr 16 23:55:16.769558 kernel: pci 0000:00:04.2: enabling Extended Tags
Apr 16 23:55:16.769622 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.769680 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff]
Apr 16 23:55:16.769737 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Apr 16 23:55:16.769794 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff]
Apr 16 23:55:16.769850 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff]
Apr 16 23:55:16.769910 kernel: pci 0000:00:04.3: enabling Extended Tags
Apr 16 23:55:16.769974 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.770034 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff]
Apr 16 23:55:16.770109 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Apr 16 23:55:16.770167 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff]
Apr 16 23:55:16.770225 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff]
Apr 16 23:55:16.770294 kernel: pci 0000:00:04.4: enabling Extended Tags
Apr 16 23:55:16.770367 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.770428 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff]
Apr 16 23:55:16.770485 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Apr 16 23:55:16.770542 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff]
Apr 16 23:55:16.770600 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff]
Apr 16 23:55:16.770658 kernel: pci 0000:00:04.5: enabling Extended Tags
Apr 16 23:55:16.770724 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.770782 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff]
Apr 16 23:55:16.770840 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Apr 16 23:55:16.770897 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff]
Apr 16 23:55:16.770955 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff]
Apr 16 23:55:16.771012 kernel: pci 0000:00:04.6: enabling Extended Tags
Apr 16 23:55:16.771095 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.771157 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff]
Apr 16 23:55:16.771214 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Apr 16 23:55:16.771271 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff]
Apr 16 23:55:16.771328 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff]
Apr 16 23:55:16.771386 kernel: pci 0000:00:04.7: enabling Extended Tags
Apr 16 23:55:16.771450 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:16.771511 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff]
Apr 16 23:55:16.771568 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Apr 16 23:55:16.771626 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff]
Apr 16 23:55:16.771683 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff]
Apr 16 23:55:16.771740 kernel: pci 0000:00:05.0: enabling Extended Tags
Apr 16 23:55:16.771809 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:55:16.771873 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff]
Apr 16 23:55:16.771933 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 16 23:55:16.771993 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:55:16.772062 kernel: pci 0000:01:00.0: enabling Extended Tags
Apr 16 23:55:16.772131 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 16 23:55:16.772192 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit]
Apr 16 23:55:16.772254 kernel: pci 0000:02:00.0: enabling Extended Tags
Apr 16 23:55:16.772348 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Apr 16 23:55:16.772417 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff]
Apr 16 23:55:16.772478 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 16 23:55:16.772538 kernel: pci 0000:03:00.0: enabling Extended Tags
Apr 16 23:55:16.772605 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:55:16.772666 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 16 23:55:16.772726 kernel: pci 0000:04:00.0: enabling Extended Tags
Apr 16 23:55:16.772798 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:55:16.772862 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff]
Apr 16 23:55:16.772922 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 16 23:55:16.772981 kernel: pci 0000:05:00.0: enabling Extended Tags
Apr 16 23:55:16.773066 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint
Apr 16 23:55:16.773136 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff]
Apr 16 23:55:16.773198 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 16 23:55:16.773262 kernel: pci 0000:06:00.0: 
enabling Extended Tags Apr 16 23:55:16.773322 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Apr 16 23:55:16.773381 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Apr 16 23:55:16.773439 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Apr 16 23:55:16.773498 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Apr 16 23:55:16.773557 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Apr 16 23:55:16.773615 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Apr 16 23:55:16.773675 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Apr 16 23:55:16.773736 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Apr 16 23:55:16.773794 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Apr 16 23:55:16.773854 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Apr 16 23:55:16.773912 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Apr 16 23:55:16.773969 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Apr 16 23:55:16.774029 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Apr 16 23:55:16.774105 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Apr 16 23:55:16.774176 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Apr 16 
23:55:16.774238 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Apr 16 23:55:16.774297 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Apr 16 23:55:16.774355 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Apr 16 23:55:16.774415 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 16 23:55:16.774474 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000 Apr 16 23:55:16.774532 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000 Apr 16 23:55:16.774595 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 16 23:55:16.774653 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Apr 16 23:55:16.774711 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Apr 16 23:55:16.774771 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 16 23:55:16.774830 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Apr 16 23:55:16.774889 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Apr 16 23:55:16.774949 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Apr 16 23:55:16.775009 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000 Apr 16 23:55:16.775090 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000 Apr 16 23:55:16.775155 kernel: pci 0000:00:02.2: 
bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Apr 16 23:55:16.775214 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Apr 16 23:55:16.775272 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000 Apr 16 23:55:16.775332 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Apr 16 23:55:16.775390 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000 Apr 16 23:55:16.775451 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000 Apr 16 23:55:16.775511 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Apr 16 23:55:16.775569 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000 Apr 16 23:55:16.775627 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000 Apr 16 23:55:16.775686 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Apr 16 23:55:16.775744 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Apr 16 23:55:16.775804 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Apr 16 23:55:16.775864 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Apr 16 23:55:16.775921 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Apr 16 23:55:16.775979 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Apr 16 23:55:16.776052 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] 
add_size 1000 Apr 16 23:55:16.776119 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Apr 16 23:55:16.776178 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Apr 16 23:55:16.776242 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Apr 16 23:55:16.776320 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Apr 16 23:55:16.776381 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Apr 16 23:55:16.776443 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Apr 16 23:55:16.776501 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Apr 16 23:55:16.776559 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 Apr 16 23:55:16.776622 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Apr 16 23:55:16.776686 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Apr 16 23:55:16.776746 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Apr 16 23:55:16.776808 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Apr 16 23:55:16.776869 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Apr 16 23:55:16.776928 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Apr 16 23:55:16.776989 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Apr 16 23:55:16.777059 kernel: 
pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Apr 16 23:55:16.777121 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Apr 16 23:55:16.777185 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Apr 16 23:55:16.777243 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Apr 16 23:55:16.777301 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Apr 16 23:55:16.777361 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Apr 16 23:55:16.777419 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Apr 16 23:55:16.777479 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Apr 16 23:55:16.777538 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Apr 16 23:55:16.777599 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Apr 16 23:55:16.777656 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Apr 16 23:55:16.777717 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Apr 16 23:55:16.777775 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Apr 16 23:55:16.777833 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Apr 16 23:55:16.777892 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Apr 16 23:55:16.777950 kernel: pci 0000:00:04.1: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Apr 16 23:55:16.778010 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Apr 16 23:55:16.778081 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Apr 16 23:55:16.778142 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Apr 16 23:55:16.778202 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Apr 16 23:55:16.778264 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Apr 16 23:55:16.778322 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Apr 16 23:55:16.778380 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Apr 16 23:55:16.778443 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Apr 16 23:55:16.778503 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Apr 16 23:55:16.778561 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Apr 16 23:55:16.778623 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Apr 16 23:55:16.778683 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Apr 16 23:55:16.778741 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Apr 16 23:55:16.778802 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Apr 16 23:55:16.778864 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 
1f] add_size 200000 add_align 100000 Apr 16 23:55:16.778928 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Apr 16 23:55:16.778993 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Apr 16 23:55:16.779073 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Apr 16 23:55:16.779136 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Apr 16 23:55:16.779216 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Apr 16 23:55:16.779277 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Apr 16 23:55:16.779337 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Apr 16 23:55:16.779398 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Apr 16 23:55:16.779463 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Apr 16 23:55:16.779523 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Apr 16 23:55:16.779581 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Apr 16 23:55:16.779640 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Apr 16 23:55:16.779698 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Apr 16 23:55:16.779758 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Apr 16 23:55:16.779816 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Apr 16 23:55:16.779877 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Apr 16 23:55:16.779940 kernel: pci 0000:00:01.4: bridge window [mem 
0x8000800000-0x80009fffff 64bit pref]: assigned Apr 16 23:55:16.779999 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Apr 16 23:55:16.780076 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Apr 16 23:55:16.780140 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Apr 16 23:55:16.780201 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Apr 16 23:55:16.780274 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Apr 16 23:55:16.780349 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Apr 16 23:55:16.780417 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Apr 16 23:55:16.780477 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Apr 16 23:55:16.780539 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Apr 16 23:55:16.780599 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: assigned Apr 16 23:55:16.780658 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Apr 16 23:55:16.780718 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Apr 16 23:55:16.780793 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Apr 16 23:55:16.780851 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Apr 16 23:55:16.780914 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Apr 16 23:55:16.780975 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Apr 16 23:55:16.781035 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Apr 16 23:55:16.781115 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: 
assigned Apr 16 23:55:16.781175 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Apr 16 23:55:16.781236 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Apr 16 23:55:16.781298 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Apr 16 23:55:16.781358 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Apr 16 23:55:16.781423 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Apr 16 23:55:16.781500 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Apr 16 23:55:16.781560 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Apr 16 23:55:16.781620 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Apr 16 23:55:16.781680 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Apr 16 23:55:16.781738 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Apr 16 23:55:16.781797 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Apr 16 23:55:16.781855 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Apr 16 23:55:16.781917 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Apr 16 23:55:16.781977 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Apr 16 23:55:16.782037 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Apr 16 23:55:16.782107 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Apr 16 23:55:16.782169 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Apr 16 23:55:16.782228 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Apr 16 23:55:16.782288 kernel: pci 
0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Apr 16 23:55:16.782355 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Apr 16 23:55:16.782416 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Apr 16 23:55:16.782477 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Apr 16 23:55:16.782538 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Apr 16 23:55:16.782596 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Apr 16 23:55:16.782654 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Apr 16 23:55:16.782715 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Apr 16 23:55:16.782774 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Apr 16 23:55:16.782834 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Apr 16 23:55:16.782910 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]: assigned Apr 16 23:55:16.782969 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Apr 16 23:55:16.783029 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Apr 16 23:55:16.783105 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Apr 16 23:55:16.783171 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Apr 16 23:55:16.783230 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Apr 16 23:55:16.783291 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Apr 16 23:55:16.783350 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Apr 16 23:55:16.783408 kernel: pci 0000:00:05.0: bridge window [mem 
0x14000000-0x141fffff]: assigned Apr 16 23:55:16.783469 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Apr 16 23:55:16.783530 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Apr 16 23:55:16.783589 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Apr 16 23:55:16.783651 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Apr 16 23:55:16.783709 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Apr 16 23:55:16.783769 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Apr 16 23:55:16.783827 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Apr 16 23:55:16.783887 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Apr 16 23:55:16.783946 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Apr 16 23:55:16.784006 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Apr 16 23:55:16.784077 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Apr 16 23:55:16.784143 kernel: pci 0000:00:01.5: BAR 0 [mem 0x14205000-0x14205fff]: assigned Apr 16 23:55:16.784201 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Apr 16 23:55:16.784275 kernel: pci 0000:00:01.6: BAR 0 [mem 0x14206000-0x14206fff]: assigned Apr 16 23:55:16.784346 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Apr 16 23:55:16.784406 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Apr 16 23:55:16.784464 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Apr 16 23:55:16.784523 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Apr 16 23:55:16.784581 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Apr 16 23:55:16.784644 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Apr 16 23:55:16.784701 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned 
Apr 16 23:55:16.784762 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Apr 16 23:55:16.784821 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Apr 16 23:55:16.784881 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Apr 16 23:55:16.784938 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Apr 16 23:55:16.784997 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Apr 16 23:55:16.785074 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Apr 16 23:55:16.785140 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Apr 16 23:55:16.785202 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Apr 16 23:55:16.785261 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Apr 16 23:55:16.785322 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Apr 16 23:55:16.785386 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Apr 16 23:55:16.785446 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.785504 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.785564 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Apr 16 23:55:16.785627 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.785695 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.785763 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Apr 16 23:55:16.785822 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.785885 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.785946 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Apr 16 23:55:16.786010 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; 
no space Apr 16 23:55:16.786081 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.786143 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Apr 16 23:55:16.786201 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.786258 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.786319 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Apr 16 23:55:16.786377 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.786450 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.786513 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Apr 16 23:55:16.786578 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.786635 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.786695 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Apr 16 23:55:16.786761 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.786826 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.786887 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Apr 16 23:55:16.786947 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.787007 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.787080 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Apr 16 23:55:16.787141 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.787202 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.787262 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Apr 16 23:55:16.787321 
kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.787380 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.787440 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Apr 16 23:55:16.787500 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.787562 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.787622 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Apr 16 23:55:16.787680 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.787739 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.787800 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Apr 16 23:55:16.787860 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.787925 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.787987 kernel: pci 0000:00:04.5: BAR 0 [mem 0x1421d000-0x1421dfff]: assigned Apr 16 23:55:16.788059 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.788120 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.788182 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Apr 16 23:55:16.788257 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.788361 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.788425 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Apr 16 23:55:16.788484 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.788542 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.788602 kernel: pci 0000:00:05.0: 
BAR 0 [mem 0x14220000-0x14220fff]: assigned Apr 16 23:55:16.788661 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.788720 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.788779 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Apr 16 23:55:16.788836 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Apr 16 23:55:16.788895 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Apr 16 23:55:16.788953 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Apr 16 23:55:16.789017 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Apr 16 23:55:16.789096 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Apr 16 23:55:16.789161 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Apr 16 23:55:16.789221 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Apr 16 23:55:16.789281 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Apr 16 23:55:16.789340 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]: assigned Apr 16 23:55:16.789399 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Apr 16 23:55:16.789458 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Apr 16 23:55:16.789517 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Apr 16 23:55:16.789576 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Apr 16 23:55:16.789638 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Apr 16 23:55:16.789698 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.789757 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.789816 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.789874 kernel: pci 0000:00:03.0: bridge window [io 
size 0x1000]: failed to assign Apr 16 23:55:16.789934 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.789992 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790062 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.790126 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790187 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.790248 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790312 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.790373 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790437 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.790516 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790586 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.790648 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790707 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.790766 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790827 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.790886 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.790946 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791005 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791081 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791142 kernel: pci 
0000:00:01.6: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791202 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791266 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791326 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791387 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791450 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791508 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791577 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791637 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791698 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791759 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791822 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Apr 16 23:55:16.791882 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Apr 16 23:55:16.791949 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Apr 16 23:55:16.792010 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Apr 16 23:55:16.792081 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Apr 16 23:55:16.792140 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 16 23:55:16.792201 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Apr 16 23:55:16.792270 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Apr 16 23:55:16.792343 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Apr 16 23:55:16.792405 kernel: pci 0000:00:01.1: PCI bridge 
to [bus 02] Apr 16 23:55:16.792466 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Apr 16 23:55:16.792526 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Apr 16 23:55:16.792593 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Apr 16 23:55:16.792656 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Apr 16 23:55:16.792720 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Apr 16 23:55:16.792778 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Apr 16 23:55:16.792837 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Apr 16 23:55:16.792902 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Apr 16 23:55:16.792961 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Apr 16 23:55:16.793020 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Apr 16 23:55:16.793094 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Apr 16 23:55:16.793160 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Apr 16 23:55:16.793230 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Apr 16 23:55:16.793294 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Apr 16 23:55:16.793357 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Apr 16 23:55:16.793417 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Apr 16 23:55:16.793485 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Apr 16 23:55:16.793564 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Apr 16 23:55:16.793623 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Apr 16 23:55:16.793684 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Apr 16 23:55:16.793744 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 
16 23:55:16.793802 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Apr 16 23:55:16.793861 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Apr 16 23:55:16.793919 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 16 23:55:16.793977 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Apr 16 23:55:16.794035 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Apr 16 23:55:16.794120 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 16 23:55:16.794181 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Apr 16 23:55:16.794239 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Apr 16 23:55:16.794297 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Apr 16 23:55:16.794363 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Apr 16 23:55:16.794424 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Apr 16 23:55:16.794487 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Apr 16 23:55:16.794547 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Apr 16 23:55:16.794606 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Apr 16 23:55:16.794664 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Apr 16 23:55:16.794722 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Apr 16 23:55:16.794781 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Apr 16 23:55:16.794838 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Apr 16 23:55:16.794913 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Apr 16 23:55:16.794971 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Apr 16 23:55:16.795029 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Apr 16 23:55:16.795102 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Apr 16 23:55:16.795162 kernel: pci 0000:00:02.5: 
bridge window [mem 0x11a00000-0x11bfffff] Apr 16 23:55:16.795220 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Apr 16 23:55:16.795281 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Apr 16 23:55:16.795341 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Apr 16 23:55:16.795401 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Apr 16 23:55:16.795460 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Apr 16 23:55:16.795518 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Apr 16 23:55:16.795576 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Apr 16 23:55:16.795635 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Apr 16 23:55:16.795695 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Apr 16 23:55:16.795756 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Apr 16 23:55:16.795819 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Apr 16 23:55:16.795877 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff] Apr 16 23:55:16.795935 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Apr 16 23:55:16.795993 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Apr 16 23:55:16.796069 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Apr 16 23:55:16.796130 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Apr 16 23:55:16.796189 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Apr 16 23:55:16.796251 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Apr 16 23:55:16.796334 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Apr 16 23:55:16.796398 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Apr 16 23:55:16.796457 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Apr 16 23:55:16.796516 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] 
Apr 16 23:55:16.796575 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Apr 16 23:55:16.796633 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Apr 16 23:55:16.796691 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Apr 16 23:55:16.796754 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Apr 16 23:55:16.796813 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Apr 16 23:55:16.796871 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Apr 16 23:55:16.796928 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Apr 16 23:55:16.796987 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Apr 16 23:55:16.797061 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Apr 16 23:55:16.797126 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Apr 16 23:55:16.797186 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Apr 16 23:55:16.797245 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Apr 16 23:55:16.797306 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff] Apr 16 23:55:16.797363 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Apr 16 23:55:16.797421 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Apr 16 23:55:16.797479 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Apr 16 23:55:16.797538 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Apr 16 23:55:16.797596 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Apr 16 23:55:16.797653 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Apr 16 23:55:16.797712 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Apr 16 23:55:16.797772 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Apr 16 23:55:16.797830 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Apr 16 23:55:16.797889 kernel: pci 0000:00:04.1: bridge window [mem 
0x8003200000-0x80033fffff 64bit pref] Apr 16 23:55:16.797949 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Apr 16 23:55:16.798008 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Apr 16 23:55:16.798085 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Apr 16 23:55:16.798145 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Apr 16 23:55:16.798204 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Apr 16 23:55:16.798268 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Apr 16 23:55:16.798327 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Apr 16 23:55:16.798388 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Apr 16 23:55:16.798447 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Apr 16 23:55:16.798505 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Apr 16 23:55:16.798563 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Apr 16 23:55:16.798622 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] Apr 16 23:55:16.798680 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Apr 16 23:55:16.798739 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Apr 16 23:55:16.798800 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Apr 16 23:55:16.798858 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Apr 16 23:55:16.798918 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Apr 16 23:55:16.798979 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Apr 16 23:55:16.799038 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Apr 16 23:55:16.799112 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Apr 16 23:55:16.799172 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Apr 16 23:55:16.799230 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Apr 16 23:55:16.799292 kernel: pci 0000:00:04.7: 
bridge window [mem 0x13e00000-0x13ffffff] Apr 16 23:55:16.799350 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Apr 16 23:55:16.799410 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Apr 16 23:55:16.799470 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Apr 16 23:55:16.799528 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Apr 16 23:55:16.799585 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Apr 16 23:55:16.799647 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 16 23:55:16.799701 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 16 23:55:16.799756 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 16 23:55:16.799820 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Apr 16 23:55:16.799876 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Apr 16 23:55:16.799937 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Apr 16 23:55:16.799991 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Apr 16 23:55:16.800063 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Apr 16 23:55:16.800123 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Apr 16 23:55:16.800185 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Apr 16 23:55:16.800240 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Apr 16 23:55:16.800317 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Apr 16 23:55:16.800374 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Apr 16 23:55:16.800435 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Apr 16 23:55:16.800493 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 16 23:55:16.800554 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] 
Apr 16 23:55:16.800609 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 16 23:55:16.800676 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Apr 16 23:55:16.800735 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 16 23:55:16.800795 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Apr 16 23:55:16.800853 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Apr 16 23:55:16.800916 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Apr 16 23:55:16.800972 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Apr 16 23:55:16.801034 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Apr 16 23:55:16.801110 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Apr 16 23:55:16.801173 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Apr 16 23:55:16.801234 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Apr 16 23:55:16.801297 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff] Apr 16 23:55:16.801353 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Apr 16 23:55:16.801415 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Apr 16 23:55:16.801469 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Apr 16 23:55:16.801530 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Apr 16 23:55:16.801586 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Apr 16 23:55:16.801650 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Apr 16 23:55:16.801706 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Apr 16 23:55:16.801773 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Apr 16 23:55:16.801828 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Apr 16 
23:55:16.801890 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Apr 16 23:55:16.801949 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Apr 16 23:55:16.802009 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Apr 16 23:55:16.802077 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Apr 16 23:55:16.802134 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Apr 16 23:55:16.802196 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Apr 16 23:55:16.802251 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Apr 16 23:55:16.802308 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Apr 16 23:55:16.802371 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Apr 16 23:55:16.802425 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Apr 16 23:55:16.802479 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Apr 16 23:55:16.802540 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Apr 16 23:55:16.802595 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Apr 16 23:55:16.802649 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Apr 16 23:55:16.802712 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Apr 16 23:55:16.802767 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Apr 16 23:55:16.802823 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Apr 16 23:55:16.802884 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Apr 16 23:55:16.802940 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Apr 16 23:55:16.802994 kernel: pci_bus 0000:18: resource 2 [mem 0x8002e00000-0x8002ffffff 64bit pref] Apr 16 23:55:16.803075 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Apr 16 23:55:16.803137 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Apr 16 23:55:16.803191 kernel: pci_bus 0000:19: 
resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Apr 16 23:55:16.803253 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Apr 16 23:55:16.803307 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Apr 16 23:55:16.803361 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Apr 16 23:55:16.803423 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Apr 16 23:55:16.803479 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Apr 16 23:55:16.803534 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Apr 16 23:55:16.803595 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Apr 16 23:55:16.803650 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Apr 16 23:55:16.803705 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Apr 16 23:55:16.803765 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Apr 16 23:55:16.803820 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Apr 16 23:55:16.803878 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref] Apr 16 23:55:16.803940 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Apr 16 23:55:16.803995 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Apr 16 23:55:16.804068 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Apr 16 23:55:16.804132 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Apr 16 23:55:16.804190 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Apr 16 23:55:16.804245 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Apr 16 23:55:16.804338 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Apr 16 23:55:16.804397 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Apr 16 23:55:16.804451 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Apr 16 23:55:16.804511 kernel: pci_bus 0000:21: resource 0 [io 
0x1000-0x1fff] Apr 16 23:55:16.804567 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Apr 16 23:55:16.804621 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Apr 16 23:55:16.804630 kernel: iommu: Default domain type: Translated Apr 16 23:55:16.804641 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 16 23:55:16.804649 kernel: efivars: Registered efivars operations Apr 16 23:55:16.804656 kernel: vgaarb: loaded Apr 16 23:55:16.804664 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 16 23:55:16.804671 kernel: VFS: Disk quotas dquot_6.6.0 Apr 16 23:55:16.804678 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 16 23:55:16.804686 kernel: pnp: PnP ACPI init Apr 16 23:55:16.804759 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 16 23:55:16.804771 kernel: pnp: PnP ACPI: found 1 devices Apr 16 23:55:16.804780 kernel: NET: Registered PF_INET protocol family Apr 16 23:55:16.804787 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 16 23:55:16.804795 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Apr 16 23:55:16.804803 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 16 23:55:16.804811 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 16 23:55:16.804818 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 16 23:55:16.804826 kernel: TCP: Hash tables configured (established 131072 bind 65536) Apr 16 23:55:16.804833 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Apr 16 23:55:16.804840 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Apr 16 23:55:16.804849 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 16 23:55:16.804916 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 16 23:55:16.804926 
kernel: PCI: CLS 0 bytes, default 64 Apr 16 23:55:16.804933 kernel: kvm [1]: HYP mode not available Apr 16 23:55:16.804941 kernel: Initialise system trusted keyrings Apr 16 23:55:16.804948 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Apr 16 23:55:16.804956 kernel: Key type asymmetric registered Apr 16 23:55:16.804963 kernel: Asymmetric key parser 'x509' registered Apr 16 23:55:16.804972 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 16 23:55:16.804980 kernel: io scheduler mq-deadline registered Apr 16 23:55:16.804987 kernel: io scheduler kyber registered Apr 16 23:55:16.804994 kernel: io scheduler bfq registered Apr 16 23:55:16.805002 kernel: ACPI: \_SB_.L001: Enabled at IRQ 36 Apr 16 23:55:16.805079 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Apr 16 23:55:16.805142 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Apr 16 23:55:16.805202 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.805263 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Apr 16 23:55:16.805325 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Apr 16 23:55:16.805385 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.805447 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Apr 16 23:55:16.805506 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Apr 16 23:55:16.805564 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.805624 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Apr 16 23:55:16.805684 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Apr 16 23:55:16.805744 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.805807 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Apr 16 23:55:16.805867 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Apr 16 23:55:16.805926 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.805988 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Apr 16 23:55:16.806059 kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 55 Apr 16 23:55:16.806121 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.806181 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Apr 16 23:55:16.806241 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Apr 16 23:55:16.806304 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.806364 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Apr 16 23:55:16.806422 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Apr 16 23:55:16.806481 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.806492 kernel: ACPI: \_SB_.L002: Enabled at IRQ 37 Apr 16 23:55:16.806551 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Apr 16 23:55:16.806609 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Apr 16 23:55:16.806672 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.806732 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Apr 16 23:55:16.806793 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Apr 16 23:55:16.806853 kernel: pcieport 0000:00:02.1: 
pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.806913 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Apr 16 23:55:16.806972 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Apr 16 23:55:16.807031 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.807111 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Apr 16 23:55:16.807175 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Apr 16 23:55:16.807233 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.807294 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Apr 16 23:55:16.807353 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Apr 16 23:55:16.807412 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.807471 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Apr 16 23:55:16.807530 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Apr 16 23:55:16.807590 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.807653 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Apr 16 23:55:16.807712 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Apr 16 23:55:16.807771 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.807832 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Apr 16 23:55:16.807891 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Apr 16 23:55:16.807949 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 
AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.807959 kernel: ACPI: \_SB_.L003: Enabled at IRQ 38 Apr 16 23:55:16.808017 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Apr 16 23:55:16.808098 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Apr 16 23:55:16.808160 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.808219 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Apr 16 23:55:16.808296 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Apr 16 23:55:16.808359 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.808420 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Apr 16 23:55:16.808481 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Apr 16 23:55:16.808540 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.808602 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Apr 16 23:55:16.808661 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Apr 16 23:55:16.808720 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.808780 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Apr 16 23:55:16.808839 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Apr 16 23:55:16.808898 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 23:55:16.808958 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Apr 16 23:55:16.809019 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Apr 16 23:55:16.809095 
kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.809157 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72
Apr 16 23:55:16.809217 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72
Apr 16 23:55:16.809275 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.809335 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73
Apr 16 23:55:16.809393 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73
Apr 16 23:55:16.809452 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.809465 kernel: ACPI: \_SB_.L000: Enabled at IRQ 35
Apr 16 23:55:16.809525 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74
Apr 16 23:55:16.809583 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74
Apr 16 23:55:16.809644 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.809704 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75
Apr 16 23:55:16.809763 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75
Apr 16 23:55:16.809821 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.809881 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76
Apr 16 23:55:16.809942 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76
Apr 16 23:55:16.810001 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.810074 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77
Apr 16 23:55:16.810136 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77
Apr 16 23:55:16.810196 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.810257 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78
Apr 16 23:55:16.810315 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78
Apr 16 23:55:16.810373 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.810437 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79
Apr 16 23:55:16.810496 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79
Apr 16 23:55:16.810554 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.810615 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80
Apr 16 23:55:16.810674 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80
Apr 16 23:55:16.810735 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.810794 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81
Apr 16 23:55:16.810855 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81
Apr 16 23:55:16.810914 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.810975 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82
Apr 16 23:55:16.811036 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82
Apr 16 23:55:16.811114 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 16 23:55:16.811125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 16 23:55:16.811132 kernel: ACPI: button: Power Button [PWRB]
Apr 16 23:55:16.811196 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002)
Apr 16 23:55:16.811263 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 16 23:55:16.811273 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 23:55:16.811281 kernel: thunder_xcv, ver 1.0
Apr 16 23:55:16.811288 kernel: thunder_bgx, ver 1.0
Apr 16 23:55:16.811296 kernel: nicpf, ver 1.0
Apr 16 23:55:16.811303 kernel: nicvf, ver 1.0
Apr 16 23:55:16.811375 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 16 23:55:16.811432 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-16T23:55:16 UTC (1776383716)
Apr 16 23:55:16.811455 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 16 23:55:16.811465 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Apr 16 23:55:16.811473 kernel: watchdog: NMI not fully supported
Apr 16 23:55:16.811481 kernel: watchdog: Hard watchdog permanently disabled
Apr 16 23:55:16.811489 kernel: NET: Registered PF_INET6 protocol family
Apr 16 23:55:16.811497 kernel: Segment Routing with IPv6
Apr 16 23:55:16.811504 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 23:55:16.811512 kernel: NET: Registered PF_PACKET protocol family
Apr 16 23:55:16.811520 kernel: Key type dns_resolver registered
Apr 16 23:55:16.811528 kernel: registered taskstats version 1
Apr 16 23:55:16.811537 kernel: Loading compiled-in X.509 certificates
Apr 16 23:55:16.811545 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 4acad53138393591155ecb80320b4c1550e344f8'
Apr 16 23:55:16.811553 kernel: Demotion targets for Node 0: null
Apr 16 23:55:16.811561 kernel: Key type .fscrypt registered
Apr 16 23:55:16.811570 kernel: Key type fscrypt-provisioning registered
Apr 16 23:55:16.811577 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 23:55:16.811585 kernel: ima: Allocated hash algorithm: sha1
Apr 16 23:55:16.811593 kernel: ima: No architecture policies found
Apr 16 23:55:16.811602 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 16 23:55:16.811610 kernel: clk: Disabling unused clocks
Apr 16 23:55:16.811617 kernel: PM: genpd: Disabling unused power domains
Apr 16 23:55:16.811625 kernel: Warning: unable to open an initial console.
Apr 16 23:55:16.811633 kernel: Freeing unused kernel memory: 39552K
Apr 16 23:55:16.811641 kernel: Run /init as init process
Apr 16 23:55:16.811648 kernel: with arguments:
Apr 16 23:55:16.811656 kernel: /init
Apr 16 23:55:16.811664 kernel: with environment:
Apr 16 23:55:16.811674 kernel: HOME=/
Apr 16 23:55:16.811682 kernel: TERM=linux
Apr 16 23:55:16.811691 systemd[1]: Successfully made /usr/ read-only.
Apr 16 23:55:16.811702 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:55:16.811711 systemd[1]: Detected virtualization kvm.
Apr 16 23:55:16.811719 systemd[1]: Detected architecture arm64.
Apr 16 23:55:16.811727 systemd[1]: Running in initrd.
Apr 16 23:55:16.811736 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:55:16.811745 systemd[1]: Hostname set to .
Apr 16 23:55:16.811752 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:55:16.811760 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:55:16.811768 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:55:16.811776 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:55:16.811785 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:55:16.811793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:55:16.811803 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:55:16.811812 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:55:16.811821 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:55:16.811829 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:55:16.811837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:55:16.811846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:55:16.811854 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:55:16.811863 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:55:16.811871 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:55:16.811880 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:55:16.811888 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:55:16.811896 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:55:16.811905 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:55:16.811913 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:55:16.811921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:55:16.811929 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:55:16.811938 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:55:16.811946 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:55:16.811956 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:55:16.811964 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:55:16.811972 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:55:16.811981 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:55:16.811989 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:55:16.811997 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:55:16.812007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:55:16.812016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:16.812024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:55:16.812032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:55:16.812054 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:55:16.812063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:55:16.812093 systemd-journald[311]: Collecting audit messages is disabled.
Apr 16 23:55:16.812114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:16.812125 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:55:16.812133 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:55:16.812142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:55:16.812150 kernel: Bridge firewalling registered
Apr 16 23:55:16.812159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:55:16.812167 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:55:16.812177 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:55:16.812185 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:55:16.812194 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:55:16.812203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:55:16.812211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:55:16.812220 systemd-journald[311]: Journal started
Apr 16 23:55:16.812238 systemd-journald[311]: Runtime Journal (/run/log/journal/da218761f4594f2a91ce368b7a1e513c) is 8M, max 319.5M, 311.5M free.
Apr 16 23:55:16.751745 systemd-modules-load[313]: Inserted module 'overlay'
Apr 16 23:55:16.768295 systemd-modules-load[313]: Inserted module 'br_netfilter'
Apr 16 23:55:16.815354 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:55:16.815743 dracut-cmdline[340]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:55:16.815718 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:55:16.830348 systemd-tmpfiles[370]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:55:16.833125 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:55:16.837875 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:55:16.876829 systemd-resolved[403]: Positive Trust Anchors:
Apr 16 23:55:16.876848 systemd-resolved[403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:55:16.876879 systemd-resolved[403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:55:16.882329 systemd-resolved[403]: Defaulting to hostname 'linux'.
Apr 16 23:55:16.883340 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:55:16.885490 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:55:16.889284 kernel: SCSI subsystem initialized
Apr 16 23:55:16.893065 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:55:16.901058 kernel: iscsi: registered transport (tcp)
Apr 16 23:55:16.914077 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:55:16.914118 kernel: QLogic iSCSI HBA Driver
Apr 16 23:55:16.930813 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:55:16.950678 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:55:16.953288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:55:16.995984 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:55:16.999501 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 23:55:17.062123 kernel: raid6: neonx8 gen() 15776 MB/s
Apr 16 23:55:17.079096 kernel: raid6: neonx4 gen() 15779 MB/s
Apr 16 23:55:17.096076 kernel: raid6: neonx2 gen() 13170 MB/s
Apr 16 23:55:17.113095 kernel: raid6: neonx1 gen() 10438 MB/s
Apr 16 23:55:17.130088 kernel: raid6: int64x8 gen() 6883 MB/s
Apr 16 23:55:17.147062 kernel: raid6: int64x4 gen() 7321 MB/s
Apr 16 23:55:17.164097 kernel: raid6: int64x2 gen() 6073 MB/s
Apr 16 23:55:17.181094 kernel: raid6: int64x1 gen() 5046 MB/s
Apr 16 23:55:17.181141 kernel: raid6: using algorithm neonx4 gen() 15779 MB/s
Apr 16 23:55:17.198094 kernel: raid6: .... xor() 10294 MB/s, rmw enabled
Apr 16 23:55:17.198143 kernel: raid6: using neon recovery algorithm
Apr 16 23:55:17.203471 kernel: xor: measuring software checksum speed
Apr 16 23:55:17.203488 kernel: 8regs : 21653 MB/sec
Apr 16 23:55:17.204133 kernel: 32regs : 21676 MB/sec
Apr 16 23:55:17.205267 kernel: arm64_neon : 28041 MB/sec
Apr 16 23:55:17.205314 kernel: xor: using function: arm64_neon (28041 MB/sec)
Apr 16 23:55:17.258169 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 23:55:17.265103 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:55:17.267933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:55:17.296658 systemd-udevd[563]: Using default interface naming scheme 'v255'.
Apr 16 23:55:17.300785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:55:17.303389 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 23:55:17.329423 dracut-pre-trigger[574]: rd.md=0: removing MD RAID activation
Apr 16 23:55:17.350888 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:55:17.353102 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:55:17.431705 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:55:17.434031 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 23:55:17.471301 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 23:55:17.474644 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB)
Apr 16 23:55:17.492701 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 23:55:17.492797 kernel: GPT:17805311 != 104857599
Apr 16 23:55:17.492839 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 23:55:17.492871 kernel: GPT:17805311 != 104857599
Apr 16 23:55:17.493743 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 23:55:17.493769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 23:55:17.508075 kernel: ACPI: bus type USB registered
Apr 16 23:55:17.510509 kernel: usbcore: registered new interface driver usbfs
Apr 16 23:55:17.510540 kernel: usbcore: registered new interface driver hub
Apr 16 23:55:17.512209 kernel: usbcore: registered new device driver usb
Apr 16 23:55:17.517140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:55:17.517312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:17.521256 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:17.523270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:17.529806 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:55:17.529988 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 16 23:55:17.530097 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 16 23:55:17.532279 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:55:17.532387 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 16 23:55:17.534054 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 16 23:55:17.534161 kernel: hub 1-0:1.0: USB hub found
Apr 16 23:55:17.535068 kernel: hub 1-0:1.0: 4 ports detected
Apr 16 23:55:17.537130 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 16 23:55:17.537264 kernel: hub 2-0:1.0: USB hub found
Apr 16 23:55:17.537752 kernel: hub 2-0:1.0: 4 ports detected
Apr 16 23:55:17.555130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:17.585620 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 23:55:17.598112 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 23:55:17.601088 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:55:17.607475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 23:55:17.608567 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 23:55:17.618468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 23:55:17.619762 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:55:17.621508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:55:17.623360 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:55:17.625768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 23:55:17.627458 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:55:17.645826 disk-uuid[661]: Primary Header is updated.
Apr 16 23:55:17.645826 disk-uuid[661]: Secondary Entries is updated.
Apr 16 23:55:17.645826 disk-uuid[661]: Secondary Header is updated.
Apr 16 23:55:17.649845 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:55:17.657081 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 23:55:17.775081 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 16 23:55:17.907009 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 16 23:55:17.907136 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 16 23:55:17.907540 kernel: usbcore: registered new interface driver usbhid
Apr 16 23:55:17.908077 kernel: usbhid: USB HID core driver
Apr 16 23:55:18.013079 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 16 23:55:18.139074 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 16 23:55:18.191068 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 16 23:55:18.672103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 23:55:18.672490 disk-uuid[666]: The operation has completed successfully.
Apr 16 23:55:18.731370 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 23:55:18.731461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 23:55:18.759107 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 23:55:18.775732 sh[685]: Success
Apr 16 23:55:18.789962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 23:55:18.789991 kernel: device-mapper: uevent: version 1.0.3
Apr 16 23:55:18.790006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 16 23:55:18.797062 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Apr 16 23:55:18.886159 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 23:55:18.888830 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 23:55:18.911277 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 23:55:18.936066 kernel: BTRFS: device fsid 10cedb9e-43f1-4d98-9b55-3b84c3a61868 devid 1 transid 33 /dev/mapper/usr (253:0) scanned by mount (698)
Apr 16 23:55:18.938869 kernel: BTRFS info (device dm-0): first mount of filesystem 10cedb9e-43f1-4d98-9b55-3b84c3a61868
Apr 16 23:55:18.938891 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:55:18.965165 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 16 23:55:18.965188 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 16 23:55:18.968640 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 23:55:18.969839 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:55:18.971085 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 23:55:18.971838 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 23:55:18.973316 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 23:55:19.008068 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (729)
Apr 16 23:55:19.011668 kernel: BTRFS info (device vda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:55:19.011704 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:55:19.021069 kernel: BTRFS info (device vda6): turning on async discard
Apr 16 23:55:19.021105 kernel: BTRFS info (device vda6): enabling free space tree
Apr 16 23:55:19.025064 kernel: BTRFS info (device vda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:55:19.025843 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 23:55:19.027635 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 23:55:19.063153 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:55:19.070233 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:55:19.104466 systemd-networkd[867]: lo: Link UP
Apr 16 23:55:19.104478 systemd-networkd[867]: lo: Gained carrier
Apr 16 23:55:19.105412 systemd-networkd[867]: Enumeration completed
Apr 16 23:55:19.105528 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:55:19.107081 systemd[1]: Reached target network.target - Network.
Apr 16 23:55:19.107283 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:19.107287 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:55:19.108230 systemd-networkd[867]: eth0: Link UP
Apr 16 23:55:19.108408 systemd-networkd[867]: eth0: Gained carrier
Apr 16 23:55:19.108418 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:19.122127 systemd-networkd[867]: eth0: DHCPv4 address 10.0.0.99/25, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 23:55:19.190087 ignition[816]: Ignition 2.22.0
Apr 16 23:55:19.190100 ignition[816]: Stage: fetch-offline
Apr 16 23:55:19.190129 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:19.190136 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 23:55:19.192977 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:55:19.190213 ignition[816]: parsed url from cmdline: ""
Apr 16 23:55:19.195318 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 23:55:19.190216 ignition[816]: no config URL provided
Apr 16 23:55:19.190221 ignition[816]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:55:19.190228 ignition[816]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:55:19.190233 ignition[816]: failed to fetch config: resource requires networking
Apr 16 23:55:19.190375 ignition[816]: Ignition finished successfully
Apr 16 23:55:19.222141 ignition[880]: Ignition 2.22.0
Apr 16 23:55:19.222158 ignition[880]: Stage: fetch
Apr 16 23:55:19.222277 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:19.222285 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 23:55:19.222354 ignition[880]: parsed url from cmdline: ""
Apr 16 23:55:19.222356 ignition[880]: no config URL provided
Apr 16 23:55:19.222361 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:55:19.222367 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:55:19.222593 ignition[880]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 16 23:55:19.222833 ignition[880]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 16 23:55:19.222846 ignition[880]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Apr 16 23:55:20.222794 ignition[880]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 16 23:55:20.223094 ignition[880]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 16 23:55:21.165233 systemd-networkd[867]: eth0: Gained IPv6LL
Apr 16 23:55:21.223029 ignition[880]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 16 23:55:21.223245 ignition[880]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 16 23:55:22.223325 ignition[880]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 16 23:55:22.223394 ignition[880]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 16 23:55:23.223660 ignition[880]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 16 23:55:23.223740 ignition[880]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 16 23:55:24.223932 ignition[880]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 16 23:55:24.224091 ignition[880]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 16 23:55:25.115324 ignition[880]: GET result: OK
Apr 16 23:55:25.115451 ignition[880]: parsing config with SHA512: 970adce36f0baed5576819b61d21c42943c926516ac6da35445a45a4280f9db6e5132aa4014bfc4fab7709e965054fc8878977c07d81a032100eb93dee90763d
Apr 16 23:55:25.120469 unknown[880]: fetched base config from "system"
Apr 16 23:55:25.120482 unknown[880]: fetched base config from "system"
Apr 16 23:55:25.120807 ignition[880]: fetch: fetch complete
Apr 16 23:55:25.120487 unknown[880]: fetched user config from "openstack"
Apr 16 23:55:25.120811 ignition[880]: fetch: fetch passed
Apr 16 23:55:25.124314 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 23:55:25.120849 ignition[880]: Ignition finished successfully
Apr 16 23:55:25.126693 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 23:55:25.154968 ignition[888]: Ignition 2.22.0
Apr 16 23:55:25.154991 ignition[888]: Stage: kargs
Apr 16 23:55:25.155147 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:25.155157 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 23:55:25.155850 ignition[888]: kargs: kargs passed
Apr 16 23:55:25.155890 ignition[888]: Ignition finished successfully
Apr 16 23:55:25.160728 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 23:55:25.162682 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 23:55:25.191993 ignition[896]: Ignition 2.22.0
Apr 16 23:55:25.192015 ignition[896]: Stage: disks
Apr 16 23:55:25.192169 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:25.192178 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 23:55:25.192939 ignition[896]: disks: disks passed
Apr 16 23:55:25.196506 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 23:55:25.192981 ignition[896]: Ignition finished successfully
Apr 16 23:55:25.197696 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 23:55:25.199098 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 23:55:25.200644 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:55:25.202255 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:55:25.203879 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:55:25.206228 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:55:25.246711 systemd-fsck[906]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Apr 16 23:55:25.250141 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 23:55:25.252540 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 23:55:25.409107 kernel: EXT4-fs (vda9): mounted filesystem 717eabe0-7ee2-4bf7-a9aa-0d27bb05c125 r/w with ordered data mode. Quota mode: none.
Apr 16 23:55:25.409610 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 23:55:25.410762 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:55:25.414386 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:55:25.416386 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 23:55:25.417323 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 23:55:25.417892 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Apr 16 23:55:25.420244 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 23:55:25.420271 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:55:25.432382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 23:55:25.434393 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 23:55:25.450706 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916)
Apr 16 23:55:25.453579 kernel: BTRFS info (device vda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:55:25.453644 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:55:25.461335 kernel: BTRFS info (device vda6): turning on async discard
Apr 16 23:55:25.461362 kernel: BTRFS info (device vda6): enabling free space tree
Apr 16 23:55:25.463477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:55:25.499065 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Apr 16 23:55:25.517888 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 23:55:25.524968 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Apr 16 23:55:25.528663 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 23:55:25.533695 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 23:55:25.652498 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 23:55:25.654725 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 23:55:25.656199 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 23:55:25.673594 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 23:55:25.676084 kernel: BTRFS info (device vda6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:55:25.699001 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 23:55:25.705598 ignition[1035]: INFO : Ignition 2.22.0
Apr 16 23:55:25.705598 ignition[1035]: INFO : Stage: mount
Apr 16 23:55:25.707082 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:25.707082 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 23:55:25.707082 ignition[1035]: INFO : mount: mount passed
Apr 16 23:55:25.707082 ignition[1035]: INFO : Ignition finished successfully
Apr 16 23:55:25.707628 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 23:55:26.560081 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Apr 16 23:55:28.565094 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Apr 16 23:55:32.576108 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Apr 16 23:55:32.579648 coreos-metadata[918]: Apr 16 23:55:32.579 WARN failed to locate config-drive, using the metadata service API instead
Apr 16 23:55:32.596282 coreos-metadata[918]: Apr 16 23:55:32.596 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Apr 16 23:55:33.404142 coreos-metadata[918]: Apr 16 23:55:33.404 INFO Fetch successful
Apr 16 23:55:33.405120 coreos-metadata[918]: Apr 16 23:55:33.404 INFO wrote hostname ci-4459-2-4-n-fcb502653b to /sysroot/etc/hostname
Apr 16 23:55:33.407096 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Apr 16 23:55:33.407199 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Apr 16 23:55:33.411102 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 23:55:33.425984 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:55:33.458080 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053)
Apr 16 23:55:33.460703 kernel: BTRFS info (device vda6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:55:33.460765 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:55:33.468615 kernel: BTRFS info (device vda6): turning on async discard
Apr 16 23:55:33.468679 kernel: BTRFS info (device vda6): enabling free space tree
Apr 16 23:55:33.470108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:55:33.501313 ignition[1071]: INFO : Ignition 2.22.0
Apr 16 23:55:33.501313 ignition[1071]: INFO : Stage: files
Apr 16 23:55:33.502846 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:33.502846 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 23:55:33.502846 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 23:55:33.506320 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 23:55:33.506320 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 23:55:33.511922 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 23:55:33.513592 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 23:55:33.513592 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 23:55:33.512832 unknown[1071]: wrote ssh authorized keys file for user: core
Apr 16 23:55:33.518735 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:55:33.520508 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 16 23:55:33.566524 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 23:55:33.670871 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:55:33.670871 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:55:33.674546 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 16 23:55:33.962035 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 23:55:34.516011 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:55:34.516011 ignition[1071]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:55:34.520209 ignition[1071]: INFO : files: files passed
Apr 16 23:55:34.520209 ignition[1071]: INFO : Ignition finished successfully
Apr 16 23:55:34.522073 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 23:55:34.524149 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 23:55:34.526826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 23:55:34.539738 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 23:55:34.541069 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:55:34.545966 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:55:34.545966 initrd-setup-root-after-ignition[1102]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:55:34.548650 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:55:34.548249 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:55:34.549859 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 23:55:34.553679 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 23:55:34.594317 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 23:55:34.594451 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 23:55:34.596561 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 23:55:34.598160 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 23:55:34.599744 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 23:55:34.600551 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 23:55:34.626871 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:55:34.629192 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 23:55:34.649101 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:55:34.650189 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:55:34.652007 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 23:55:34.653664 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 23:55:34.653782 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:55:34.655921 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 23:55:34.657770 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 23:55:34.659192 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 23:55:34.660680 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:55:34.662364 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 23:55:34.663986 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:55:34.665741 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 23:55:34.667348 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:55:34.669089 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 23:55:34.670853 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 23:55:34.672379 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 23:55:34.673707 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 23:55:34.673833 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:55:34.675765 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:55:34.676815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:55:34.678480 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 23:55:34.682109 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:55:34.683640 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 23:55:34.683748 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:55:34.686288 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 23:55:34.686416 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:55:34.688140 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 23:55:34.688241 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 23:55:34.690731 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 23:55:34.691573 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 23:55:34.691706 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:55:34.694055 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 23:55:34.695515 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 23:55:34.695631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:55:34.697236 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 23:55:34.697337 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:55:34.702150 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 23:55:34.710229 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 23:55:34.725542 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 23:55:34.728488 ignition[1127]: INFO : Ignition 2.22.0
Apr 16 23:55:34.728488 ignition[1127]: INFO : Stage: umount
Apr 16 23:55:34.731023 ignition[1127]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:34.731023 ignition[1127]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 23:55:34.731023 ignition[1127]: INFO : umount: umount passed
Apr 16 23:55:34.731023 ignition[1127]: INFO : Ignition finished successfully
Apr 16 23:55:34.732187 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 23:55:34.732319 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 23:55:34.735526 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 23:55:34.735582 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 23:55:34.736824 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 23:55:34.736866 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 23:55:34.738341 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 23:55:34.738384 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 23:55:34.739893 systemd[1]: Stopped target network.target - Network.
Apr 16 23:55:34.742303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 23:55:34.742362 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:55:34.743957 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 23:55:34.745426 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 23:55:34.749118 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:55:34.750173 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 23:55:34.751583 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 23:55:34.753178 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 23:55:34.753215 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:55:34.754805 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 23:55:34.754835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:55:34.756218 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 23:55:34.756277 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 23:55:34.757906 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 23:55:34.757946 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 23:55:34.762317 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 23:55:34.763584 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 23:55:34.775632 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 23:55:34.775742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 23:55:34.779918 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 23:55:34.780275 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 23:55:34.780419 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 23:55:34.783800 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 23:55:34.784488 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 23:55:34.786629 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 23:55:34.786686 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:55:34.789384 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 23:55:34.790779 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 23:55:34.790834 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:55:34.792919 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 23:55:34.792962 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:55:34.795677 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 23:55:34.795717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:55:34.797376 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 23:55:34.797419 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:55:34.799877 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:55:34.802423 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 23:55:34.802481 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:55:34.802726 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 23:55:34.803157 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 23:55:34.805882 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 23:55:34.805920 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 23:55:34.809840 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 23:55:34.819176 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:55:34.821005 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 23:55:34.823066 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 23:55:34.824418 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 23:55:34.824482 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:55:34.825565 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 23:55:34.825596 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:55:34.827127 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 23:55:34.827174 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:55:34.829553 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 23:55:34.829601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:55:34.831815 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 23:55:34.831862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:55:34.835167 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 23:55:34.836843 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 23:55:34.836902 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:55:34.839903 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 23:55:34.839947 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:55:34.842989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:55:34.843033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:34.847133 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 23:55:34.847184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 23:55:34.847214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:55:34.853555 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 23:55:34.853759 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 23:55:34.855863 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 23:55:34.858194 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 23:55:34.878787 systemd[1]: Switching root.
Apr 16 23:55:34.913451 systemd-journald[311]: Journal stopped
Apr 16 23:55:35.940568 systemd-journald[311]: Received SIGTERM from PID 1 (systemd).
Apr 16 23:55:35.940640 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 23:55:35.940659 kernel: SELinux: policy capability open_perms=1
Apr 16 23:55:35.940678 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 23:55:35.940691 kernel: SELinux: policy capability always_check_network=0
Apr 16 23:55:35.940703 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 23:55:35.940716 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 23:55:35.940731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 23:55:35.940740 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 23:55:35.940750 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 23:55:35.940759 kernel: audit: type=1403 audit(1776383735.032:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 23:55:35.940772 systemd[1]: Successfully loaded SELinux policy in 53.096ms.
Apr 16 23:55:35.940793 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.739ms.
Apr 16 23:55:35.940807 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:55:35.940818 systemd[1]: Detected virtualization kvm.
Apr 16 23:55:35.940828 systemd[1]: Detected architecture arm64.
Apr 16 23:55:35.940838 systemd[1]: Detected first boot.
Apr 16 23:55:35.940847 systemd[1]: Hostname set to .
Apr 16 23:55:35.940857 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:55:35.940867 zram_generator::config[1173]: No configuration found.
Apr 16 23:55:35.940879 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 23:55:35.940891 systemd[1]: Populated /etc with preset unit settings.
Apr 16 23:55:35.940902 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 23:55:35.940912 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 23:55:35.940922 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 23:55:35.940932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:55:35.940942 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 23:55:35.940952 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 23:55:35.940963 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 23:55:35.940973 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 23:55:35.940983 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 23:55:35.940996 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 23:55:35.941008 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 23:55:35.941018 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 23:55:35.941028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:55:35.941049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:55:35.941064 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 23:55:35.941076 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 23:55:35.941087 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 23:55:35.941098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:55:35.941108 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 16 23:55:35.941118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:55:35.941131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:55:35.941142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 23:55:35.941153 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 23:55:35.941163 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:55:35.941173 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 23:55:35.941183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:55:35.941194 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:55:35.941204 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:55:35.941214 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:55:35.941224 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 23:55:35.941235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 23:55:35.941245 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 23:55:35.941255 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:55:35.941266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:55:35.941276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:55:35.941286 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 23:55:35.941296 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 23:55:35.941305 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 23:55:35.941315 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 23:55:35.941327 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 23:55:35.941337 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 23:55:35.941348 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 23:55:35.941359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 23:55:35.941369 systemd[1]: Reached target machines.target - Containers.
Apr 16 23:55:35.941379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 23:55:35.941389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:55:35.941399 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:55:35.941409 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 23:55:35.941420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:55:35.941430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:55:35.941440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:55:35.941450 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 23:55:35.941460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:55:35.941470 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 23:55:35.941480 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 23:55:35.941490 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 23:55:35.941502 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 23:55:35.941512 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 23:55:35.941522 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:55:35.941532 kernel: loop: module loaded
Apr 16 23:55:35.941542 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:55:35.941553 kernel: fuse: init (API version 7.41)
Apr 16 23:55:35.941564 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:55:35.941574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:55:35.941587 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 23:55:35.941597 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 23:55:35.941607 kernel: ACPI: bus type drm_connector registered
Apr 16 23:55:35.941617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:55:35.941629 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 23:55:35.941639 systemd[1]: Stopped verity-setup.service.
Apr 16 23:55:35.941649 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 23:55:35.941682 systemd-journald[1237]: Collecting audit messages is disabled.
Apr 16 23:55:35.941708 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 23:55:35.941718 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 23:55:35.941730 systemd-journald[1237]: Journal started
Apr 16 23:55:35.941751 systemd-journald[1237]: Runtime Journal (/run/log/journal/da218761f4594f2a91ce368b7a1e513c) is 8M, max 319.5M, 311.5M free.
Apr 16 23:55:35.729544 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 23:55:35.741438 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 23:55:35.741875 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 23:55:35.946108 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:55:35.944205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 23:55:35.945517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 23:55:35.946991 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 23:55:35.950166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:55:35.951673 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 23:55:35.952960 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 23:55:35.953147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 23:55:35.954332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:55:35.954482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:55:35.955691 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:55:35.955845 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:55:35.958413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:55:35.958602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:55:35.960006 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 23:55:35.960421 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 23:55:35.961690 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:55:35.961844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:55:35.963137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:55:35.964359 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:55:35.965789 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 23:55:35.967190 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 23:55:35.979070 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:55:35.981263 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 23:55:35.983184 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 23:55:35.984179 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 23:55:35.984206 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:55:35.985906 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 23:55:35.994194 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 23:55:35.995195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:55:35.997495 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 23:55:35.999507 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 23:55:36.000633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:55:36.001600 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 23:55:36.002723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:55:36.006369 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:55:36.010781 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 23:55:36.016237 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 23:55:36.020584 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 23:55:36.021815 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 23:55:36.024734 systemd-journald[1237]: Time spent on flushing to /var/log/journal/da218761f4594f2a91ce368b7a1e513c is 29.561ms for 1735 entries.
Apr 16 23:55:36.024734 systemd-journald[1237]: System Journal (/var/log/journal/da218761f4594f2a91ce368b7a1e513c) is 8M, max 584.8M, 576.8M free.
Apr 16 23:55:36.070021 systemd-journald[1237]: Received client request to flush runtime journal.
Apr 16 23:55:36.070080 kernel: loop0: detected capacity change from 0 to 100632
Apr 16 23:55:36.024422 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 23:55:36.028759 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 23:55:36.032972 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 23:55:36.036535 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:55:36.040485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:55:36.072865 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 23:55:36.090602 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 23:55:36.091353 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 23:55:36.097154 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 23:55:36.101067 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 23:55:36.101801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:55:36.129066 kernel: loop1: detected capacity change from 0 to 209336
Apr 16 23:55:36.138088 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Apr 16 23:55:36.138398 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Apr 16 23:55:36.141816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:55:36.197074 kernel: loop2: detected capacity change from 0 to 1632
Apr 16 23:55:36.236124 kernel: loop3: detected capacity change from 0 to 119840
Apr 16 23:55:36.290073 kernel: loop4: detected capacity change from 0 to 100632
Apr 16 23:55:36.317060 kernel: loop5: detected capacity change from 0 to 209336
Apr 16 23:55:36.360078 kernel: loop6: detected capacity change from 0 to 1632
Apr 16 23:55:36.370079 kernel: loop7: detected capacity change from 0 to 119840
Apr 16 23:55:36.399523 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'.
Apr 16 23:55:36.399929 (sd-merge)[1319]: Merged extensions into '/usr'.
Apr 16 23:55:36.404592 systemd[1]: Reload requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 23:55:36.404617 systemd[1]: Reloading...
Apr 16 23:55:36.456065 zram_generator::config[1345]: No configuration found.
Apr 16 23:55:36.595433 systemd[1]: Reloading finished in 190 ms.
Apr 16 23:55:36.615023 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 23:55:36.616516 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 23:55:36.630241 systemd[1]: Starting ensure-sysext.service...
Apr 16 23:55:36.631838 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:55:36.634309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:55:36.646003 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 23:55:36.646333 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 23:55:36.646687 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 23:55:36.646970 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 23:55:36.647738 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 23:55:36.648073 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Apr 16 23:55:36.648192 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Apr 16 23:55:36.649018 systemd[1]: Reload requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)...
Apr 16 23:55:36.649036 systemd[1]: Reloading...
Apr 16 23:55:36.664696 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:55:36.664925 systemd-tmpfiles[1383]: Skipping /boot
Apr 16 23:55:36.667328 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Apr 16 23:55:36.670959 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:55:36.671085 systemd-tmpfiles[1383]: Skipping /boot
Apr 16 23:55:36.698130 zram_generator::config[1412]: No configuration found.
Apr 16 23:55:36.842109 systemd[1]: Reloading finished in 192 ms.
Apr 16 23:55:36.850568 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:55:36.859161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:55:36.878240 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:55:36.884698 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 23:55:36.888261 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 23:55:36.892757 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:55:36.902406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:55:36.910999 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 23:55:36.915731 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 16 23:55:36.936085 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 23:55:36.943511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 23:55:36.945805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:55:36.946975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:55:36.950848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:55:36.953425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:55:36.954632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:55:36.958313 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:55:36.959518 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:55:36.962346 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 23:55:36.978213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 23:55:36.980329 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:55:36.980492 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:55:36.988373 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 23:55:36.991116 systemd[1]: Finished ensure-sysext.service.
Apr 16 23:55:36.998795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:55:36.999005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:55:37.001302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:55:37.004320 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:55:37.008703 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm...
Apr 16 23:55:37.009799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:55:37.009849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:55:37.009884 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:55:37.009917 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 23:55:37.012786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:55:37.019274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:55:37.021658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:55:37.024692 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:55:37.024865 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:55:37.027865 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:55:37.028979 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0
Apr 16 23:55:37.029094 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 16 23:55:37.029118 kernel: [drm] features: -context_init
Apr 16 23:55:37.030372 kernel: [drm] number of scanouts: 1
Apr 16 23:55:37.032090 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 16 23:55:37.032131 kernel: [drm] number of cap sets: 0
Apr 16 23:55:37.032157 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 16 23:55:37.033269 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0
Apr 16 23:55:37.039809 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 23:55:37.043240 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 16 23:55:37.060068 kernel: PTP clock support registered
Apr 16 23:55:37.060400 augenrules[1539]: No rules
Apr 16 23:55:37.061752 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:55:37.063086 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:55:37.064554 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully.
Apr 16 23:55:37.065202 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm.
Apr 16 23:55:37.071167 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 23:55:37.105264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:37.134692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:55:37.136116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:37.141868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:37.153384 ldconfig[1287]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 23:55:37.163890 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 23:55:37.169250 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 23:55:37.189032 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 23:55:37.195875 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 23:55:37.197389 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 23:55:37.199200 systemd-networkd[1494]: lo: Link UP
Apr 16 23:55:37.199209 systemd-networkd[1494]: lo: Gained carrier
Apr 16 23:55:37.200236 systemd-networkd[1494]: Enumeration completed
Apr 16 23:55:37.200356 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:55:37.200698 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:37.200709 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:55:37.201403 systemd-networkd[1494]: eth0: Link UP
Apr 16 23:55:37.201516 systemd-networkd[1494]: eth0: Gained carrier
Apr 16 23:55:37.201536 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:37.202741 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:55:37.205094 systemd-resolved[1495]: Positive Trust Anchors:
Apr 16 23:55:37.205116 systemd-resolved[1495]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:55:37.205149 systemd-resolved[1495]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:55:37.205164 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:55:37.212330 systemd-resolved[1495]: Using system hostname 'ci-4459-2-4-n-fcb502653b'.
Apr 16 23:55:37.213799 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:55:37.215221 systemd[1]: Reached target network.target - Network.
Apr 16 23:55:37.215968 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:55:37.219141 systemd-networkd[1494]: eth0: DHCPv4 address 10.0.0.99/25, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 23:55:37.227299 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 16 23:55:37.293385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:37.294679 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:55:37.295760 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 23:55:37.296914 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 23:55:37.298265 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 23:55:37.299269 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 23:55:37.300364 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 23:55:37.301446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 23:55:37.301481 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:55:37.302269 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:55:37.304638 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 23:55:37.306854 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 23:55:37.309523 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 16 23:55:37.310794 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 16 23:55:37.311956 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 16 23:55:37.318183 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 23:55:37.320635 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 16 23:55:37.322297 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 23:55:37.323299 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:55:37.324114 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:55:37.324951 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:55:37.324985 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:55:37.327805 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 16 23:55:37.329481 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 23:55:37.331543 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 16 23:55:37.334255 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 23:55:37.338200 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 23:55:37.339058 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Apr 16 23:55:37.340122 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 23:55:37.341915 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 23:55:37.342959 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 23:55:37.344839 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 23:55:37.355377 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 23:55:37.359297 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 23:55:37.361226 jq[1591]: false
Apr 16 23:55:37.361523 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 23:55:37.365821 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 23:55:37.368519 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 23:55:37.372431 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 23:55:37.373058 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 23:55:37.373984 extend-filesystems[1592]: Found /dev/vda6
Apr 16 23:55:37.375081 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 23:55:37.379545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 23:55:37.380950 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 23:55:37.381150 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 23:55:37.381384 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 23:55:37.381551 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 23:55:37.383552 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 23:55:37.383730 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 23:55:37.386594 jq[1609]: true
Apr 16 23:55:37.388145 extend-filesystems[1592]: Found /dev/vda9
Apr 16 23:55:37.389235 extend-filesystems[1592]: Checking size of /dev/vda9
Apr 16 23:55:37.391893 chronyd[1584]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Apr 16 23:55:37.394795 chronyd[1584]: Loaded seccomp filter (level 2)
Apr 16 23:55:37.395692 systemd[1]: Started chronyd.service - NTP client/server.
Apr 16 23:55:37.399468 jq[1619]: true
Apr 16 23:55:37.406078 (ntainerd)[1624]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 23:55:37.409976 extend-filesystems[1592]: Resized partition /dev/vda9
Apr 16 23:55:37.415145 tar[1613]: linux-arm64/LICENSE
Apr 16 23:55:37.415145 tar[1613]: linux-arm64/helm
Apr 16 23:55:37.418574 update_engine[1607]: I20260416 23:55:37.418289 1607 main.cc:92] Flatcar Update Engine starting
Apr 16 23:55:37.420884 extend-filesystems[1639]: resize2fs 1.47.3 (8-Jul-2025)
Apr 16 23:55:37.429784 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks
Apr 16 23:55:37.447141 dbus-daemon[1587]: [system] SELinux support is enabled
Apr 16 23:55:37.447378 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 23:55:37.452306 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 23:55:37.452339 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 23:55:37.474363 update_engine[1607]: I20260416 23:55:37.463102 1607 update_check_scheduler.cc:74] Next update check in 3m29s
Apr 16 23:55:37.453761 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 23:55:37.453778 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 23:55:37.461005 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 23:55:37.465076 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 23:55:37.468569 systemd-logind[1603]: New seat seat0.
Apr 16 23:55:37.475754 systemd-logind[1603]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 16 23:55:37.475782 systemd-logind[1603]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 16 23:55:37.476491 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 23:55:37.548268 locksmithd[1650]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 23:55:37.652096 containerd[1624]: time="2026-04-16T23:55:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 16 23:55:37.655387 containerd[1624]: time="2026-04-16T23:55:37.655192640Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 16 23:55:37.667348 containerd[1624]: time="2026-04-16T23:55:37.667306560Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.72µs"
Apr 16 23:55:37.667348 containerd[1624]: time="2026-04-16T23:55:37.667342480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 16 23:55:37.672797 containerd[1624]: time="2026-04-16T23:55:37.667361320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 16 23:55:37.674736 containerd[1624]: time="2026-04-16T23:55:37.674629720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 16 23:55:37.674736 containerd[1624]: time="2026-04-16T23:55:37.674672720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 16 23:55:37.674736 containerd[1624]: time="2026-04-16T23:55:37.674698000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:55:37.674829 containerd[1624]: time="2026-04-16T23:55:37.674754560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:55:37.674829 containerd[1624]: time="2026-04-16T23:55:37.674766200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:55:37.675076 bash[1649]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.675276000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.675304920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.675321680Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.675330200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.675421520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.675931960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.676020840Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.676034880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 16 23:55:37.676146 containerd[1624]: time="2026-04-16T23:55:37.676093360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 16 23:55:37.677694 containerd[1624]: time="2026-04-16T23:55:37.676380800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 16 23:55:37.677694 containerd[1624]: time="2026-04-16T23:55:37.676464720Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 23:55:37.679091 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 23:55:37.683007 systemd[1]: Starting sshkeys.service...
Apr 16 23:55:37.703873 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 16 23:55:37.706395 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 16 23:55:37.723067 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Apr 16 23:55:37.743627 containerd[1624]: time="2026-04-16T23:55:37.743537120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 16 23:55:37.743904 containerd[1624]: time="2026-04-16T23:55:37.743839520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 16 23:55:37.743999 containerd[1624]: time="2026-04-16T23:55:37.743983840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744083360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744106200Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744118720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744133200Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744147720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744160440Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744173560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744194360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744210080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744385080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744411120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744426640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744437560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 16 23:55:37.744528 containerd[1624]: time="2026-04-16T23:55:37.744448280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 16 23:55:37.744797 containerd[1624]: time="2026-04-16T23:55:37.744458760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 23:55:37.746157 containerd[1624]: time="2026-04-16T23:55:37.746109600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 16 23:55:37.746290 containerd[1624]: time="2026-04-16T23:55:37.746272720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 16 23:55:37.746352 containerd[1624]: time="2026-04-16T23:55:37.746339240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 16 23:55:37.746524 containerd[1624]: time="2026-04-16T23:55:37.746498040Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 16 23:55:37.746904 containerd[1624]: time="2026-04-16T23:55:37.746872440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 23:55:37.747214 containerd[1624]: time="2026-04-16T23:55:37.747193760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 23:55:37.747304 containerd[1624]: time="2026-04-16T23:55:37.747288840Z" level=info msg="Start snapshots syncer" Apr 16 23:55:37.747394 containerd[1624]: time="2026-04-16T23:55:37.747374320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 23:55:37.747991 containerd[1624]: time="2026-04-16T23:55:37.747941200Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefine
dVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 23:55:37.748355 containerd[1624]: time="2026-04-16T23:55:37.748320800Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 23:55:37.748502 containerd[1624]: time="2026-04-16T23:55:37.748483640Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 23:55:37.748711 containerd[1624]: time="2026-04-16T23:55:37.748683920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 23:55:37.748889 containerd[1624]: time="2026-04-16T23:55:37.748868920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 23:55:37.749061 containerd[1624]: time="2026-04-16T23:55:37.749018360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 23:55:37.749148 containerd[1624]: time="2026-04-16T23:55:37.749131080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 23:55:37.749223 containerd[1624]: time="2026-04-16T23:55:37.749207920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 23:55:37.749597 containerd[1624]: time="2026-04-16T23:55:37.749292480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 23:55:37.749597 containerd[1624]: 
time="2026-04-16T23:55:37.749315160Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 23:55:37.749597 containerd[1624]: time="2026-04-16T23:55:37.749355280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 23:55:37.749597 containerd[1624]: time="2026-04-16T23:55:37.749368400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 23:55:37.749597 containerd[1624]: time="2026-04-16T23:55:37.749383480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 23:55:37.749597 containerd[1624]: time="2026-04-16T23:55:37.749549160Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:55:37.749775 containerd[1624]: time="2026-04-16T23:55:37.749754720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:55:37.749830 containerd[1624]: time="2026-04-16T23:55:37.749814640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:55:37.749887 containerd[1624]: time="2026-04-16T23:55:37.749870120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:55:37.749936 containerd[1624]: time="2026-04-16T23:55:37.749925160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 23:55:37.749991 containerd[1624]: time="2026-04-16T23:55:37.749978760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 23:55:37.750215 containerd[1624]: time="2026-04-16T23:55:37.750186960Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 23:55:37.750768 containerd[1624]: time="2026-04-16T23:55:37.750535080Z" level=info msg="runtime interface created" Apr 16 23:55:37.750768 containerd[1624]: time="2026-04-16T23:55:37.750553680Z" level=info msg="created NRI interface" Apr 16 23:55:37.750768 containerd[1624]: time="2026-04-16T23:55:37.750565680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 23:55:37.750768 containerd[1624]: time="2026-04-16T23:55:37.750581160Z" level=info msg="Connect containerd service" Apr 16 23:55:37.750768 containerd[1624]: time="2026-04-16T23:55:37.750608600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 23:55:37.751772 containerd[1624]: time="2026-04-16T23:55:37.751735960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:55:37.846833 containerd[1624]: time="2026-04-16T23:55:37.846723520Z" level=info msg="Start subscribing containerd event" Apr 16 23:55:37.847139 containerd[1624]: time="2026-04-16T23:55:37.846959800Z" level=info msg="Start recovering state" Apr 16 23:55:37.847139 containerd[1624]: time="2026-04-16T23:55:37.847065680Z" level=info msg="Start event monitor" Apr 16 23:55:37.847352 containerd[1624]: time="2026-04-16T23:55:37.847272080Z" level=info msg="Start cni network conf syncer for default" Apr 16 23:55:37.847352 containerd[1624]: time="2026-04-16T23:55:37.847291600Z" level=info msg="Start streaming server" Apr 16 23:55:37.847352 containerd[1624]: time="2026-04-16T23:55:37.847303040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 23:55:37.847589 containerd[1624]: time="2026-04-16T23:55:37.847481120Z" level=info msg="runtime interface starting up..." 
Apr 16 23:55:37.847589 containerd[1624]: time="2026-04-16T23:55:37.847496960Z" level=info msg="starting plugins..." Apr 16 23:55:37.847589 containerd[1624]: time="2026-04-16T23:55:37.847516640Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 23:55:37.848017 containerd[1624]: time="2026-04-16T23:55:37.847994800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 23:55:37.848219 containerd[1624]: time="2026-04-16T23:55:37.848141040Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 23:55:37.848682 containerd[1624]: time="2026-04-16T23:55:37.848658760Z" level=info msg="containerd successfully booted in 0.240485s" Apr 16 23:55:37.850166 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 23:55:37.902276 sshd_keygen[1625]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 23:55:37.923608 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 23:55:37.927216 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 23:55:37.939670 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 23:55:37.939887 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 23:55:37.942652 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 23:55:37.961465 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 23:55:37.964382 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 23:55:37.966570 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 16 23:55:37.968009 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 23:55:37.971341 tar[1613]: linux-arm64/README.md Apr 16 23:55:37.994864 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 16 23:55:38.037065 kernel: EXT4-fs (vda9): resized filesystem to 12499963 Apr 16 23:55:38.069858 extend-filesystems[1639]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 23:55:38.069858 extend-filesystems[1639]: old_desc_blocks = 1, new_desc_blocks = 6 Apr 16 23:55:38.069858 extend-filesystems[1639]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long. Apr 16 23:55:38.074154 extend-filesystems[1592]: Resized filesystem in /dev/vda9 Apr 16 23:55:38.071214 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 23:55:38.071437 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 23:55:38.354064 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Apr 16 23:55:38.734068 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Apr 16 23:55:39.021338 systemd-networkd[1494]: eth0: Gained IPv6LL Apr 16 23:55:39.023750 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 23:55:39.025598 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 23:55:39.027897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:55:39.030025 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 23:55:39.065243 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 23:55:40.279482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:55:40.283425 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:55:40.361077 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Apr 16 23:55:40.745067 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Apr 16 23:55:40.974436 kubelet[1722]: E0416 23:55:40.974337 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:55:40.977504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:55:40.977639 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:55:40.977985 systemd[1]: kubelet.service: Consumed 813ms CPU time, 259.5M memory peak. 
Apr 16 23:55:44.373070 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Apr 16 23:55:44.378972 coreos-metadata[1586]: Apr 16 23:55:44.378 WARN failed to locate config-drive, using the metadata service API instead Apr 16 23:55:44.398270 coreos-metadata[1586]: Apr 16 23:55:44.398 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Apr 16 23:55:44.754063 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Apr 16 23:55:44.759888 coreos-metadata[1665]: Apr 16 23:55:44.759 WARN failed to locate config-drive, using the metadata service API instead Apr 16 23:55:44.772673 coreos-metadata[1665]: Apr 16 23:55:44.772 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Apr 16 23:55:47.618614 coreos-metadata[1665]: Apr 16 23:55:47.618 INFO Fetch successful Apr 16 23:55:47.618614 coreos-metadata[1665]: Apr 16 23:55:47.618 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 16 23:55:48.412116 coreos-metadata[1586]: Apr 16 23:55:48.412 INFO Fetch successful Apr 16 23:55:48.412116 coreos-metadata[1586]: Apr 16 23:55:48.412 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 16 23:55:49.235596 coreos-metadata[1665]: Apr 16 23:55:49.235 INFO Fetch successful Apr 16 23:55:49.238339 unknown[1665]: wrote ssh authorized keys file for user: core Apr 16 23:55:49.246258 coreos-metadata[1586]: Apr 16 23:55:49.246 INFO Fetch successful Apr 16 23:55:49.246258 coreos-metadata[1586]: Apr 16 23:55:49.246 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Apr 16 23:55:49.272337 update-ssh-keys[1741]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:55:49.273171 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 16 23:55:49.274413 systemd[1]: Finished sshkeys.service. 
Apr 16 23:55:50.271321 coreos-metadata[1586]: Apr 16 23:55:50.271 INFO Fetch successful Apr 16 23:55:50.271321 coreos-metadata[1586]: Apr 16 23:55:50.271 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Apr 16 23:55:51.001634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 23:55:51.003124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:55:51.082535 coreos-metadata[1586]: Apr 16 23:55:51.082 INFO Fetch successful Apr 16 23:55:51.082535 coreos-metadata[1586]: Apr 16 23:55:51.082 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Apr 16 23:55:51.139007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:55:51.143188 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:55:51.182005 kubelet[1752]: E0416 23:55:51.181946 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:55:51.185005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:55:51.185152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:55:51.185612 systemd[1]: kubelet.service: Consumed 145ms CPU time, 108M memory peak. 
Apr 16 23:55:51.896967 coreos-metadata[1586]: Apr 16 23:55:51.896 INFO Fetch successful Apr 16 23:55:51.896967 coreos-metadata[1586]: Apr 16 23:55:51.896 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Apr 16 23:55:52.709632 coreos-metadata[1586]: Apr 16 23:55:52.709 INFO Fetch successful Apr 16 23:55:52.734536 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 16 23:55:52.734945 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 23:55:52.735091 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 23:55:52.737105 systemd[1]: Startup finished in 3.033s (kernel) + 18.426s (initrd) + 17.758s (userspace) = 39.219s. Apr 16 23:55:59.516136 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 23:55:59.518701 systemd[1]: Started sshd@0-10.0.0.99:22-50.85.169.122:50922.service - OpenSSH per-connection server daemon (50.85.169.122:50922). Apr 16 23:55:59.691275 sshd[1767]: Accepted publickey for core from 50.85.169.122 port 50922 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:55:59.693758 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:59.699950 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 23:55:59.701021 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 23:55:59.707648 systemd-logind[1603]: New session 1 of user core. Apr 16 23:55:59.724479 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 23:55:59.726899 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 23:55:59.740298 (systemd)[1772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 23:55:59.742711 systemd-logind[1603]: New session c1 of user core. 
Apr 16 23:55:59.856371 systemd[1772]: Queued start job for default target default.target. Apr 16 23:55:59.866979 systemd[1772]: Created slice app.slice - User Application Slice. Apr 16 23:55:59.867017 systemd[1772]: Reached target paths.target - Paths. Apr 16 23:55:59.867086 systemd[1772]: Reached target timers.target - Timers. Apr 16 23:55:59.868430 systemd[1772]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 23:55:59.877915 systemd[1772]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 23:55:59.877989 systemd[1772]: Reached target sockets.target - Sockets. Apr 16 23:55:59.878033 systemd[1772]: Reached target basic.target - Basic System. Apr 16 23:55:59.878092 systemd[1772]: Reached target default.target - Main User Target. Apr 16 23:55:59.878118 systemd[1772]: Startup finished in 128ms. Apr 16 23:55:59.878554 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 23:55:59.879927 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 23:55:59.937185 systemd[1]: Started sshd@1-10.0.0.99:22-50.85.169.122:39374.service - OpenSSH per-connection server daemon (50.85.169.122:39374). Apr 16 23:56:00.049499 sshd[1783]: Accepted publickey for core from 50.85.169.122 port 39374 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:56:00.050890 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:56:00.054855 systemd-logind[1603]: New session 2 of user core. Apr 16 23:56:00.066392 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 23:56:00.102701 sshd[1786]: Connection closed by 50.85.169.122 port 39374 Apr 16 23:56:00.103079 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Apr 16 23:56:00.106836 systemd[1]: sshd@1-10.0.0.99:22-50.85.169.122:39374.service: Deactivated successfully. Apr 16 23:56:00.108419 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 16 23:56:00.109144 systemd-logind[1603]: Session 2 logged out. Waiting for processes to exit. Apr 16 23:56:00.110008 systemd-logind[1603]: Removed session 2. Apr 16 23:56:00.134125 systemd[1]: Started sshd@2-10.0.0.99:22-50.85.169.122:39380.service - OpenSSH per-connection server daemon (50.85.169.122:39380). Apr 16 23:56:00.241857 sshd[1793]: Accepted publickey for core from 50.85.169.122 port 39380 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:56:00.243295 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:56:00.247251 systemd-logind[1603]: New session 3 of user core. Apr 16 23:56:00.257188 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 23:56:00.288555 sshd[1796]: Connection closed by 50.85.169.122 port 39380 Apr 16 23:56:00.289237 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Apr 16 23:56:00.292429 systemd-logind[1603]: Session 3 logged out. Waiting for processes to exit. Apr 16 23:56:00.292660 systemd[1]: sshd@2-10.0.0.99:22-50.85.169.122:39380.service: Deactivated successfully. Apr 16 23:56:00.295361 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 23:56:00.296656 systemd-logind[1603]: Removed session 3. Apr 16 23:56:00.317181 systemd[1]: Started sshd@3-10.0.0.99:22-50.85.169.122:39382.service - OpenSSH per-connection server daemon (50.85.169.122:39382). Apr 16 23:56:00.418144 sshd[1803]: Accepted publickey for core from 50.85.169.122 port 39382 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:56:00.419178 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:56:00.422772 systemd-logind[1603]: New session 4 of user core. Apr 16 23:56:00.432383 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 16 23:56:00.467783 sshd[1806]: Connection closed by 50.85.169.122 port 39382 Apr 16 23:56:00.468355 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Apr 16 23:56:00.471900 systemd-logind[1603]: Session 4 logged out. Waiting for processes to exit. Apr 16 23:56:00.472170 systemd[1]: sshd@3-10.0.0.99:22-50.85.169.122:39382.service: Deactivated successfully. Apr 16 23:56:00.474729 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 23:56:00.476192 systemd-logind[1603]: Removed session 4. Apr 16 23:56:00.492722 systemd[1]: Started sshd@4-10.0.0.99:22-50.85.169.122:39390.service - OpenSSH per-connection server daemon (50.85.169.122:39390). Apr 16 23:56:00.609198 sshd[1812]: Accepted publickey for core from 50.85.169.122 port 39390 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:56:00.610451 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:56:00.613898 systemd-logind[1603]: New session 5 of user core. Apr 16 23:56:00.624311 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 23:56:00.660868 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 23:56:00.661159 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:56:00.678201 sudo[1816]: pam_unix(sudo:session): session closed for user root Apr 16 23:56:00.693632 sshd[1815]: Connection closed by 50.85.169.122 port 39390 Apr 16 23:56:00.692963 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Apr 16 23:56:00.696772 systemd[1]: sshd@4-10.0.0.99:22-50.85.169.122:39390.service: Deactivated successfully. Apr 16 23:56:00.699291 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 23:56:00.699870 systemd-logind[1603]: Session 5 logged out. Waiting for processes to exit. Apr 16 23:56:00.701236 systemd-logind[1603]: Removed session 5. 
Apr 16 23:56:00.720452 systemd[1]: Started sshd@5-10.0.0.99:22-50.85.169.122:39404.service - OpenSSH per-connection server daemon (50.85.169.122:39404). Apr 16 23:56:00.822177 sshd[1824]: Accepted publickey for core from 50.85.169.122 port 39404 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:56:00.823297 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:56:00.826915 systemd-logind[1603]: New session 6 of user core. Apr 16 23:56:00.837177 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 23:56:00.859208 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 23:56:00.859460 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:56:00.862584 sudo[1829]: pam_unix(sudo:session): session closed for user root Apr 16 23:56:00.866969 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 23:56:00.867240 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:56:00.875610 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 23:56:00.909308 augenrules[1851]: No rules Apr 16 23:56:00.910508 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:56:00.910694 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:56:00.911821 sudo[1828]: pam_unix(sudo:session): session closed for user root Apr 16 23:56:00.927421 sshd[1827]: Connection closed by 50.85.169.122 port 39404 Apr 16 23:56:00.926979 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Apr 16 23:56:00.930545 systemd[1]: sshd@5-10.0.0.99:22-50.85.169.122:39404.service: Deactivated successfully. Apr 16 23:56:00.931922 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 16 23:56:00.932603 systemd-logind[1603]: Session 6 logged out. Waiting for processes to exit. Apr 16 23:56:00.933794 systemd-logind[1603]: Removed session 6. Apr 16 23:56:00.954936 systemd[1]: Started sshd@6-10.0.0.99:22-50.85.169.122:39408.service - OpenSSH per-connection server daemon (50.85.169.122:39408). Apr 16 23:56:01.056031 sshd[1860]: Accepted publickey for core from 50.85.169.122 port 39408 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:56:01.057402 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:56:01.060948 systemd-logind[1603]: New session 7 of user core. Apr 16 23:56:01.072380 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 23:56:01.095459 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 23:56:01.095711 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:56:01.179441 chronyd[1584]: Selected source PHC0 Apr 16 23:56:01.251104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 23:56:01.252341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:56:01.360662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:56:01.363890 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:56:01.399524 kubelet[1891]: E0416 23:56:01.399483 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:56:01.401897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:56:01.402117 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:56:01.402487 systemd[1]: kubelet.service: Consumed 135ms CPU time, 105.2M memory peak. Apr 16 23:56:01.461134 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 23:56:01.475591 (dockerd)[1903]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 23:56:01.722896 dockerd[1903]: time="2026-04-16T23:56:01.722778698Z" level=info msg="Starting up" Apr 16 23:56:01.723682 dockerd[1903]: time="2026-04-16T23:56:01.723658514Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 23:56:01.732989 dockerd[1903]: time="2026-04-16T23:56:01.732958964Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 23:56:01.769604 dockerd[1903]: time="2026-04-16T23:56:01.769546560Z" level=info msg="Loading containers: start." Apr 16 23:56:01.779061 kernel: Initializing XFRM netlink socket Apr 16 23:56:02.029628 systemd-networkd[1494]: docker0: Link UP Apr 16 23:56:02.039977 dockerd[1903]: time="2026-04-16T23:56:02.039938743Z" level=info msg="Loading containers: done." 
Apr 16 23:56:02.050637 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2951996889-merged.mount: Deactivated successfully. Apr 16 23:56:02.059323 dockerd[1903]: time="2026-04-16T23:56:02.059268502Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 23:56:02.059458 dockerd[1903]: time="2026-04-16T23:56:02.059339965Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 23:56:02.059458 dockerd[1903]: time="2026-04-16T23:56:02.059408202Z" level=info msg="Initializing buildkit" Apr 16 23:56:02.094437 dockerd[1903]: time="2026-04-16T23:56:02.094313607Z" level=info msg="Completed buildkit initialization" Apr 16 23:56:02.102871 dockerd[1903]: time="2026-04-16T23:56:02.102830602Z" level=info msg="Daemon has completed initialization" Apr 16 23:56:02.103075 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 23:56:02.103623 dockerd[1903]: time="2026-04-16T23:56:02.102960951Z" level=info msg="API listen on /run/docker.sock" Apr 16 23:56:02.807806 containerd[1624]: time="2026-04-16T23:56:02.807772788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 16 23:56:03.254968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855591686.mount: Deactivated successfully. 
Apr 16 23:56:04.245802 containerd[1624]: time="2026-04-16T23:56:04.245727838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:04.247519 containerd[1624]: time="2026-04-16T23:56:04.247465579Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008885" Apr 16 23:56:04.249517 containerd[1624]: time="2026-04-16T23:56:04.249461233Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:04.252950 containerd[1624]: time="2026-04-16T23:56:04.252909654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:04.254580 containerd[1624]: time="2026-04-16T23:56:04.254543042Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 1.44673355s" Apr 16 23:56:04.254618 containerd[1624]: time="2026-04-16T23:56:04.254589242Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 16 23:56:04.255327 containerd[1624]: time="2026-04-16T23:56:04.255118048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 16 23:56:05.550534 containerd[1624]: time="2026-04-16T23:56:05.550482248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:05.552175 containerd[1624]: time="2026-04-16T23:56:05.552140642Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297794" Apr 16 23:56:05.553519 containerd[1624]: time="2026-04-16T23:56:05.553469990Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:05.557020 containerd[1624]: time="2026-04-16T23:56:05.556993469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:05.558130 containerd[1624]: time="2026-04-16T23:56:05.558084997Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.302935856s" Apr 16 23:56:05.558130 containerd[1624]: time="2026-04-16T23:56:05.558126760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 16 23:56:05.559000 containerd[1624]: time="2026-04-16T23:56:05.558799703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 16 23:56:06.573929 containerd[1624]: time="2026-04-16T23:56:06.573282485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:06.575177 containerd[1624]: 
time="2026-04-16T23:56:06.575143692Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141378" Apr 16 23:56:06.578288 containerd[1624]: time="2026-04-16T23:56:06.578231506Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:06.582007 containerd[1624]: time="2026-04-16T23:56:06.581970682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:06.583555 containerd[1624]: time="2026-04-16T23:56:06.583527848Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.024701992s" Apr 16 23:56:06.583646 containerd[1624]: time="2026-04-16T23:56:06.583632689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 16 23:56:06.584263 containerd[1624]: time="2026-04-16T23:56:06.584233411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 16 23:56:07.519565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113558368.mount: Deactivated successfully. 
Apr 16 23:56:07.775700 containerd[1624]: time="2026-04-16T23:56:07.775567791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:07.777076 containerd[1624]: time="2026-04-16T23:56:07.776952869Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040534" Apr 16 23:56:07.779358 containerd[1624]: time="2026-04-16T23:56:07.779301626Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:07.781974 containerd[1624]: time="2026-04-16T23:56:07.781928022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:07.782539 containerd[1624]: time="2026-04-16T23:56:07.782398501Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.19813293s" Apr 16 23:56:07.782539 containerd[1624]: time="2026-04-16T23:56:07.782430141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 16 23:56:07.783121 containerd[1624]: time="2026-04-16T23:56:07.783093220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 16 23:56:08.328450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389009568.mount: Deactivated successfully. 
Apr 16 23:56:09.152946 containerd[1624]: time="2026-04-16T23:56:09.152877626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:09.154855 containerd[1624]: time="2026-04-16T23:56:09.154799783Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Apr 16 23:56:09.156837 containerd[1624]: time="2026-04-16T23:56:09.156797060Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:09.160338 containerd[1624]: time="2026-04-16T23:56:09.160269055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:09.161367 containerd[1624]: time="2026-04-16T23:56:09.161310013Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.377831313s" Apr 16 23:56:09.161403 containerd[1624]: time="2026-04-16T23:56:09.161367373Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 16 23:56:09.161912 containerd[1624]: time="2026-04-16T23:56:09.161872973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 16 23:56:09.585597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184805212.mount: Deactivated successfully. 
Apr 16 23:56:09.596965 containerd[1624]: time="2026-04-16T23:56:09.596865472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:56:09.598236 containerd[1624]: time="2026-04-16T23:56:09.598155631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 16 23:56:09.600356 containerd[1624]: time="2026-04-16T23:56:09.600262430Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:56:09.604021 containerd[1624]: time="2026-04-16T23:56:09.603942989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:56:09.604826 containerd[1624]: time="2026-04-16T23:56:09.604791948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 442.889255ms" Apr 16 23:56:09.604860 containerd[1624]: time="2026-04-16T23:56:09.604821748Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 16 23:56:09.605504 containerd[1624]: time="2026-04-16T23:56:09.605448748Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 16 23:56:10.099543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730092383.mount: 
Deactivated successfully. Apr 16 23:56:11.001429 containerd[1624]: time="2026-04-16T23:56:11.001353133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:11.003416 containerd[1624]: time="2026-04-16T23:56:11.003392970Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886470" Apr 16 23:56:11.006312 containerd[1624]: time="2026-04-16T23:56:11.006248846Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:11.015708 containerd[1624]: time="2026-04-16T23:56:11.015637913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:11.017294 containerd[1624]: time="2026-04-16T23:56:11.017227350Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.411728802s" Apr 16 23:56:11.017398 containerd[1624]: time="2026-04-16T23:56:11.017309910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 16 23:56:11.456952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 23:56:11.458653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:56:11.584387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:56:11.594644 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:56:11.631376 kubelet[2345]: E0416 23:56:11.631320 2345 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:56:11.634013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:56:11.634172 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:56:11.634448 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.3M memory peak. Apr 16 23:56:16.092578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:56:16.093060 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.3M memory peak. Apr 16 23:56:16.095092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:56:16.112265 systemd[1]: Reload requested from client PID 2359 ('systemctl') (unit session-7.scope)... Apr 16 23:56:16.112281 systemd[1]: Reloading... Apr 16 23:56:16.187764 zram_generator::config[2405]: No configuration found. Apr 16 23:56:16.344535 systemd[1]: Reloading finished in 231 ms. Apr 16 23:56:16.387420 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 23:56:16.387507 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 23:56:16.387821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:56:16.391486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:56:16.518265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:56:16.522812 (kubelet)[2449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:56:16.559210 kubelet[2449]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 23:56:16.559522 kubelet[2449]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 23:56:16.559566 kubelet[2449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 23:56:16.559716 kubelet[2449]: I0416 23:56:16.559682 2449 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 23:56:17.551072 kubelet[2449]: I0416 23:56:17.550271 2449 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 23:56:17.551072 kubelet[2449]: I0416 23:56:17.550302 2449 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:56:17.551072 kubelet[2449]: I0416 23:56:17.550513 2449 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 23:56:17.603868 kubelet[2449]: E0416 23:56:17.603814 2449 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 23:56:17.605170 kubelet[2449]: I0416 23:56:17.605137 2449 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:56:17.614946 kubelet[2449]: I0416 23:56:17.614921 2449 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 23:56:17.618818 kubelet[2449]: I0416 23:56:17.618780 2449 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 16 23:56:17.621962 kubelet[2449]: I0416 23:56:17.621906 2449 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:56:17.622129 kubelet[2449]: I0416 23:56:17.621952 2449 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-fcb502653b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"cont
ainer","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:56:17.622129 kubelet[2449]: I0416 23:56:17.622124 2449 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 23:56:17.622129 kubelet[2449]: I0416 23:56:17.622134 2449 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 23:56:17.623420 kubelet[2449]: I0416 23:56:17.623385 2449 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:56:17.629853 kubelet[2449]: I0416 23:56:17.629807 2449 kubelet.go:480] "Attempting to sync node with API server" Apr 16 23:56:17.629853 kubelet[2449]: I0416 23:56:17.629835 2449 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:56:17.629853 kubelet[2449]: I0416 23:56:17.629860 2449 kubelet.go:386] "Adding apiserver pod source" Apr 16 23:56:17.629968 kubelet[2449]: I0416 23:56:17.629878 2449 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:56:17.634999 kubelet[2449]: E0416 23:56:17.633919 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 23:56:17.634999 kubelet[2449]: E0416 23:56:17.634632 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-fcb502653b&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 23:56:17.636098 
kubelet[2449]: I0416 23:56:17.636081 2449 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:56:17.636896 kubelet[2449]: I0416 23:56:17.636868 2449 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:56:17.637050 kubelet[2449]: W0416 23:56:17.637030 2449 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 23:56:17.639322 kubelet[2449]: I0416 23:56:17.639306 2449 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 23:56:17.639369 kubelet[2449]: I0416 23:56:17.639350 2449 server.go:1289] "Started kubelet" Apr 16 23:56:17.639808 kubelet[2449]: I0416 23:56:17.639770 2449 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:56:17.639955 kubelet[2449]: I0416 23:56:17.639919 2449 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:56:17.640757 kubelet[2449]: I0416 23:56:17.640729 2449 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:56:17.645733 kubelet[2449]: I0416 23:56:17.645708 2449 server.go:317] "Adding debug handlers to kubelet server" Apr 16 23:56:17.647363 kubelet[2449]: I0416 23:56:17.647324 2449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 23:56:17.647841 kubelet[2449]: I0416 23:56:17.647816 2449 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:56:17.648248 kubelet[2449]: E0416 23:56:17.648217 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fcb502653b\" not found" Apr 16 23:56:17.648310 kubelet[2449]: I0416 23:56:17.648258 2449 
volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 23:56:17.648451 kubelet[2449]: I0416 23:56:17.648432 2449 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 23:56:17.648526 kubelet[2449]: I0416 23:56:17.648510 2449 reconciler.go:26] "Reconciler: start to sync state" Apr 16 23:56:17.648981 kubelet[2449]: E0416 23:56:17.648953 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 23:56:17.649230 kubelet[2449]: E0416 23:56:17.649204 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-fcb502653b?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Apr 16 23:56:17.649758 kubelet[2449]: I0416 23:56:17.649729 2449 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:56:17.649843 kubelet[2449]: I0416 23:56:17.649821 2449 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:56:17.652138 kubelet[2449]: E0416 23:56:17.652110 2449 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:56:17.652632 kubelet[2449]: I0416 23:56:17.652613 2449 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:56:17.656064 kubelet[2449]: E0416 23:56:17.652947 2449 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-fcb502653b.18a6fba252f55043 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-fcb502653b,UID:ci-4459-2-4-n-fcb502653b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-fcb502653b,},FirstTimestamp:2026-04-16 23:56:17.639321667 +0000 UTC m=+1.112857386,LastTimestamp:2026-04-16 23:56:17.639321667 +0000 UTC m=+1.112857386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-fcb502653b,}" Apr 16 23:56:17.659748 kubelet[2449]: I0416 23:56:17.659712 2449 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:56:17.659748 kubelet[2449]: I0416 23:56:17.659737 2449 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:56:17.659748 kubelet[2449]: I0416 23:56:17.659755 2449 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:56:17.663653 kubelet[2449]: I0416 23:56:17.663585 2449 policy_none.go:49] "None policy: Start" Apr 16 23:56:17.663653 kubelet[2449]: I0416 23:56:17.663617 2449 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 23:56:17.663653 kubelet[2449]: I0416 23:56:17.663629 2449 state_mem.go:35] "Initializing new in-memory state store" Apr 16 23:56:17.670527 kubelet[2449]: I0416 23:56:17.670473 2449 
kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 23:56:17.672062 kubelet[2449]: I0416 23:56:17.672008 2449 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 23:56:17.672062 kubelet[2449]: I0416 23:56:17.672061 2449 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 23:56:17.672144 kubelet[2449]: I0416 23:56:17.672081 2449 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 23:56:17.672144 kubelet[2449]: I0416 23:56:17.672088 2449 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 23:56:17.672423 kubelet[2449]: E0416 23:56:17.672335 2449 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:56:17.673498 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 23:56:17.674158 kubelet[2449]: E0416 23:56:17.674120 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 23:56:17.683894 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 23:56:17.687159 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 16 23:56:17.698394 kubelet[2449]: E0416 23:56:17.698346 2449 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:56:17.698676 kubelet[2449]: I0416 23:56:17.698643 2449 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:56:17.698707 kubelet[2449]: I0416 23:56:17.698665 2449 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:56:17.699639 kubelet[2449]: I0416 23:56:17.699603 2449 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:56:17.700442 kubelet[2449]: E0416 23:56:17.700405 2449 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:56:17.700532 kubelet[2449]: E0416 23:56:17.700447 2449 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-fcb502653b\" not found" Apr 16 23:56:17.785544 systemd[1]: Created slice kubepods-burstable-podffd4fc445f5286ea18a3631baf165ee8.slice - libcontainer container kubepods-burstable-podffd4fc445f5286ea18a3631baf165ee8.slice. Apr 16 23:56:17.795072 kubelet[2449]: E0416 23:56:17.795029 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fcb502653b\" not found" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.798509 systemd[1]: Created slice kubepods-burstable-podcea8d98cb830329bc4960e42abf1314c.slice - libcontainer container kubepods-burstable-podcea8d98cb830329bc4960e42abf1314c.slice. 
Apr 16 23:56:17.800286 kubelet[2449]: I0416 23:56:17.800271 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.800965 kubelet[2449]: E0416 23:56:17.800710 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fcb502653b\" not found" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.801831 kubelet[2449]: E0416 23:56:17.801318 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.802359 systemd[1]: Created slice kubepods-burstable-pod1c2bf432fcbd16ee209e9bee4a467629.slice - libcontainer container kubepods-burstable-pod1c2bf432fcbd16ee209e9bee4a467629.slice. Apr 16 23:56:17.804208 kubelet[2449]: E0416 23:56:17.804145 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fcb502653b\" not found" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.849920 kubelet[2449]: E0416 23:56:17.849867 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-fcb502653b?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" Apr 16 23:56:17.950525 kubelet[2449]: I0416 23:56:17.950288 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950525 kubelet[2449]: I0416 23:56:17.950331 2449 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950525 kubelet[2449]: I0416 23:56:17.950353 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffd4fc445f5286ea18a3631baf165ee8-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" (UID: \"ffd4fc445f5286ea18a3631baf165ee8\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950525 kubelet[2449]: I0416 23:56:17.950369 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffd4fc445f5286ea18a3631baf165ee8-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" (UID: \"ffd4fc445f5286ea18a3631baf165ee8\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950525 kubelet[2449]: I0416 23:56:17.950384 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950731 kubelet[2449]: I0416 23:56:17.950408 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950731 kubelet[2449]: I0416 23:56:17.950422 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950731 kubelet[2449]: I0416 23:56:17.950436 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c2bf432fcbd16ee209e9bee4a467629-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-fcb502653b\" (UID: \"1c2bf432fcbd16ee209e9bee4a467629\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:17.950731 kubelet[2449]: I0416 23:56:17.950451 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffd4fc445f5286ea18a3631baf165ee8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" (UID: \"ffd4fc445f5286ea18a3631baf165ee8\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:18.004557 kubelet[2449]: I0416 23:56:18.004291 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:18.004862 kubelet[2449]: E0416 23:56:18.004828 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:18.097933 containerd[1624]: time="2026-04-16T23:56:18.097785335Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-fcb502653b,Uid:ffd4fc445f5286ea18a3631baf165ee8,Namespace:kube-system,Attempt:0,}" Apr 16 23:56:18.102559 containerd[1624]: time="2026-04-16T23:56:18.102511087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-fcb502653b,Uid:cea8d98cb830329bc4960e42abf1314c,Namespace:kube-system,Attempt:0,}" Apr 16 23:56:18.105385 containerd[1624]: time="2026-04-16T23:56:18.105357403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-fcb502653b,Uid:1c2bf432fcbd16ee209e9bee4a467629,Namespace:kube-system,Attempt:0,}" Apr 16 23:56:18.139668 containerd[1624]: time="2026-04-16T23:56:18.139622228Z" level=info msg="connecting to shim 732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6" address="unix:///run/containerd/s/30ae00e849707d340adf47077673b1e1de1221da4010374220d402bbd827271e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:18.143032 containerd[1624]: time="2026-04-16T23:56:18.142878743Z" level=info msg="connecting to shim 4e483d1e2fabb0791e0265fbf38cf9449fa8c286e0c7956ae6d20aff111bb20b" address="unix:///run/containerd/s/38db5c9b2f6ae2ea7fa77f9d3e5004898a9704debee8e3dc428afc7d6f0fb067" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:18.158908 containerd[1624]: time="2026-04-16T23:56:18.158838837Z" level=info msg="connecting to shim 426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e" address="unix:///run/containerd/s/90ddf14663eba449761f223be919fc82893b9637b59c36bac2302a565b3c3240" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:18.171221 systemd[1]: Started cri-containerd-732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6.scope - libcontainer container 732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6. 
Apr 16 23:56:18.174809 systemd[1]: Started cri-containerd-4e483d1e2fabb0791e0265fbf38cf9449fa8c286e0c7956ae6d20aff111bb20b.scope - libcontainer container 4e483d1e2fabb0791e0265fbf38cf9449fa8c286e0c7956ae6d20aff111bb20b. Apr 16 23:56:18.179676 systemd[1]: Started cri-containerd-426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e.scope - libcontainer container 426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e. Apr 16 23:56:18.215929 containerd[1624]: time="2026-04-16T23:56:18.215879189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-fcb502653b,Uid:cea8d98cb830329bc4960e42abf1314c,Namespace:kube-system,Attempt:0,} returns sandbox id \"732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6\"" Apr 16 23:56:18.220224 containerd[1624]: time="2026-04-16T23:56:18.220121343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-fcb502653b,Uid:ffd4fc445f5286ea18a3631baf165ee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e483d1e2fabb0791e0265fbf38cf9449fa8c286e0c7956ae6d20aff111bb20b\"" Apr 16 23:56:18.222395 containerd[1624]: time="2026-04-16T23:56:18.222360940Z" level=info msg="CreateContainer within sandbox \"732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:56:18.223968 containerd[1624]: time="2026-04-16T23:56:18.223937218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-fcb502653b,Uid:1c2bf432fcbd16ee209e9bee4a467629,Namespace:kube-system,Attempt:0,} returns sandbox id \"426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e\"" Apr 16 23:56:18.225597 containerd[1624]: time="2026-04-16T23:56:18.225569095Z" level=info msg="CreateContainer within sandbox \"4e483d1e2fabb0791e0265fbf38cf9449fa8c286e0c7956ae6d20aff111bb20b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 
16 23:56:18.228795 containerd[1624]: time="2026-04-16T23:56:18.228746571Z" level=info msg="CreateContainer within sandbox \"426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:56:18.238306 containerd[1624]: time="2026-04-16T23:56:18.238277957Z" level=info msg="Container 739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:18.247464 containerd[1624]: time="2026-04-16T23:56:18.247436624Z" level=info msg="CreateContainer within sandbox \"732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c\"" Apr 16 23:56:18.247955 containerd[1624]: time="2026-04-16T23:56:18.247886944Z" level=info msg="Container 09fb47bcdf9b889034575a9e167be7db11928fefbae187daf3673e9109645289: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:18.248716 containerd[1624]: time="2026-04-16T23:56:18.248688022Z" level=info msg="StartContainer for \"739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c\"" Apr 16 23:56:18.249945 containerd[1624]: time="2026-04-16T23:56:18.249917821Z" level=info msg="connecting to shim 739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c" address="unix:///run/containerd/s/30ae00e849707d340adf47077673b1e1de1221da4010374220d402bbd827271e" protocol=ttrpc version=3 Apr 16 23:56:18.250961 kubelet[2449]: E0416 23:56:18.250920 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-fcb502653b?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Apr 16 23:56:18.253088 containerd[1624]: time="2026-04-16T23:56:18.252428777Z" level=info msg="Container 
d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:18.260652 containerd[1624]: time="2026-04-16T23:56:18.260565285Z" level=info msg="CreateContainer within sandbox \"4e483d1e2fabb0791e0265fbf38cf9449fa8c286e0c7956ae6d20aff111bb20b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09fb47bcdf9b889034575a9e167be7db11928fefbae187daf3673e9109645289\"" Apr 16 23:56:18.261869 containerd[1624]: time="2026-04-16T23:56:18.261840404Z" level=info msg="StartContainer for \"09fb47bcdf9b889034575a9e167be7db11928fefbae187daf3673e9109645289\"" Apr 16 23:56:18.264195 containerd[1624]: time="2026-04-16T23:56:18.264144120Z" level=info msg="connecting to shim 09fb47bcdf9b889034575a9e167be7db11928fefbae187daf3673e9109645289" address="unix:///run/containerd/s/38db5c9b2f6ae2ea7fa77f9d3e5004898a9704debee8e3dc428afc7d6f0fb067" protocol=ttrpc version=3 Apr 16 23:56:18.266204 systemd[1]: Started cri-containerd-739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c.scope - libcontainer container 739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c. 
Apr 16 23:56:18.267485 containerd[1624]: time="2026-04-16T23:56:18.267452396Z" level=info msg="CreateContainer within sandbox \"426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b\"" Apr 16 23:56:18.268790 containerd[1624]: time="2026-04-16T23:56:18.268522514Z" level=info msg="StartContainer for \"d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b\"" Apr 16 23:56:18.271325 containerd[1624]: time="2026-04-16T23:56:18.271293190Z" level=info msg="connecting to shim d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b" address="unix:///run/containerd/s/90ddf14663eba449761f223be919fc82893b9637b59c36bac2302a565b3c3240" protocol=ttrpc version=3 Apr 16 23:56:18.282951 systemd[1]: Started cri-containerd-09fb47bcdf9b889034575a9e167be7db11928fefbae187daf3673e9109645289.scope - libcontainer container 09fb47bcdf9b889034575a9e167be7db11928fefbae187daf3673e9109645289. Apr 16 23:56:18.294193 systemd[1]: Started cri-containerd-d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b.scope - libcontainer container d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b. 
Apr 16 23:56:18.319851 containerd[1624]: time="2026-04-16T23:56:18.319780401Z" level=info msg="StartContainer for \"739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c\" returns successfully" Apr 16 23:56:18.332769 containerd[1624]: time="2026-04-16T23:56:18.332703662Z" level=info msg="StartContainer for \"09fb47bcdf9b889034575a9e167be7db11928fefbae187daf3673e9109645289\" returns successfully" Apr 16 23:56:18.340056 containerd[1624]: time="2026-04-16T23:56:18.339988252Z" level=info msg="StartContainer for \"d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b\" returns successfully" Apr 16 23:56:18.407749 kubelet[2449]: I0416 23:56:18.407581 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:18.408299 kubelet[2449]: E0416 23:56:18.408265 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:18.680338 kubelet[2449]: E0416 23:56:18.680191 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fcb502653b\" not found" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:18.684156 kubelet[2449]: E0416 23:56:18.684131 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fcb502653b\" not found" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:18.684760 kubelet[2449]: E0416 23:56:18.684740 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-fcb502653b\" not found" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.210025 kubelet[2449]: I0416 23:56:19.209979 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.364506 kubelet[2449]: E0416 23:56:19.364463 2449 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-fcb502653b\" not found" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.510875 kubelet[2449]: I0416 23:56:19.510660 2449 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.510875 kubelet[2449]: E0416 23:56:19.510707 2449 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-fcb502653b\": node \"ci-4459-2-4-n-fcb502653b\" not found" Apr 16 23:56:19.550000 kubelet[2449]: I0416 23:56:19.549951 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.555539 kubelet[2449]: E0416 23:56:19.555474 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.555539 kubelet[2449]: I0416 23:56:19.555505 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.558225 kubelet[2449]: E0416 23:56:19.557541 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.558225 kubelet[2449]: I0416 23:56:19.557577 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.559778 kubelet[2449]: E0416 23:56:19.559713 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-fcb502653b\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.633403 kubelet[2449]: I0416 23:56:19.633364 2449 apiserver.go:52] "Watching apiserver" Apr 16 23:56:19.649236 kubelet[2449]: I0416 23:56:19.649164 2449 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 23:56:19.684992 kubelet[2449]: I0416 23:56:19.684956 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.685310 kubelet[2449]: I0416 23:56:19.685141 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.687619 kubelet[2449]: E0416 23:56:19.687575 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-fcb502653b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:19.688589 kubelet[2449]: E0416 23:56:19.688556 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:20.686724 kubelet[2449]: I0416 23:56:20.686504 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:21.496787 systemd[1]: Reload requested from client PID 2734 ('systemctl') (unit session-7.scope)... Apr 16 23:56:21.496802 systemd[1]: Reloading... Apr 16 23:56:21.563101 zram_generator::config[2777]: No configuration found. Apr 16 23:56:21.739598 systemd[1]: Reloading finished in 242 ms. Apr 16 23:56:21.764577 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 23:56:21.764762 kubelet[2449]: I0416 23:56:21.764570 2449 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:56:21.786787 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 23:56:21.788092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:56:21.788146 systemd[1]: kubelet.service: Consumed 1.452s CPU time, 129M memory peak. Apr 16 23:56:21.790335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:56:21.946631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:56:21.959393 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:56:22.002062 kubelet[2822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 23:56:22.002062 kubelet[2822]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 23:56:22.002062 kubelet[2822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 23:56:22.002062 kubelet[2822]: I0416 23:56:22.001960 2822 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 23:56:22.007403 kubelet[2822]: I0416 23:56:22.007359 2822 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 23:56:22.007403 kubelet[2822]: I0416 23:56:22.007394 2822 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:56:22.007615 kubelet[2822]: I0416 23:56:22.007596 2822 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 23:56:22.008780 kubelet[2822]: I0416 23:56:22.008763 2822 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 23:56:22.013869 kubelet[2822]: I0416 23:56:22.013492 2822 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:56:22.018184 kubelet[2822]: I0416 23:56:22.018108 2822 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 23:56:22.020890 kubelet[2822]: I0416 23:56:22.020852 2822 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 16 23:56:22.021217 kubelet[2822]: I0416 23:56:22.021191 2822 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:56:22.021420 kubelet[2822]: I0416 23:56:22.021278 2822 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-fcb502653b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:56:22.021538 kubelet[2822]: I0416 23:56:22.021526 2822 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
23:56:22.021586 kubelet[2822]: I0416 23:56:22.021579 2822 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 23:56:22.021675 kubelet[2822]: I0416 23:56:22.021667 2822 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:56:22.021887 kubelet[2822]: I0416 23:56:22.021877 2822 kubelet.go:480] "Attempting to sync node with API server" Apr 16 23:56:22.022506 kubelet[2822]: I0416 23:56:22.022482 2822 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:56:22.022642 kubelet[2822]: I0416 23:56:22.022630 2822 kubelet.go:386] "Adding apiserver pod source" Apr 16 23:56:22.022705 kubelet[2822]: I0416 23:56:22.022696 2822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:56:22.023965 kubelet[2822]: I0416 23:56:22.023856 2822 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:56:22.026415 kubelet[2822]: I0416 23:56:22.025565 2822 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:56:22.033832 kubelet[2822]: I0416 23:56:22.033797 2822 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 23:56:22.035910 kubelet[2822]: I0416 23:56:22.034747 2822 server.go:1289] "Started kubelet" Apr 16 23:56:22.035910 kubelet[2822]: I0416 23:56:22.035069 2822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:56:22.035910 kubelet[2822]: I0416 23:56:22.035315 2822 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:56:22.035910 kubelet[2822]: I0416 23:56:22.035368 2822 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:56:22.035910 kubelet[2822]: I0416 23:56:22.035766 2822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 
23:56:22.038280 kubelet[2822]: I0416 23:56:22.038251 2822 server.go:317] "Adding debug handlers to kubelet server" Apr 16 23:56:22.043297 kubelet[2822]: I0416 23:56:22.043260 2822 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:56:22.046015 kubelet[2822]: E0416 23:56:22.045974 2822 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:56:22.046153 kubelet[2822]: I0416 23:56:22.046134 2822 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 23:56:22.046343 kubelet[2822]: E0416 23:56:22.046327 2822 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-fcb502653b\" not found" Apr 16 23:56:22.046424 kubelet[2822]: I0416 23:56:22.046414 2822 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 23:56:22.046559 kubelet[2822]: I0416 23:56:22.046550 2822 reconciler.go:26] "Reconciler: start to sync state" Apr 16 23:56:22.048842 kubelet[2822]: I0416 23:56:22.048780 2822 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 23:56:22.050396 kubelet[2822]: I0416 23:56:22.050355 2822 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:56:22.050448 kubelet[2822]: I0416 23:56:22.050429 2822 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 23:56:22.050487 kubelet[2822]: I0416 23:56:22.050453 2822 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 23:56:22.050487 kubelet[2822]: I0416 23:56:22.050473 2822 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 23:56:22.050487 kubelet[2822]: I0416 23:56:22.050483 2822 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 23:56:22.050545 kubelet[2822]: I0416 23:56:22.050488 2822 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:56:22.050545 kubelet[2822]: E0416 23:56:22.050522 2822 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:56:22.055853 kubelet[2822]: I0416 23:56:22.055828 2822 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:56:22.086073 kubelet[2822]: I0416 23:56:22.086030 2822 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:56:22.086073 kubelet[2822]: I0416 23:56:22.086077 2822 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:56:22.086202 kubelet[2822]: I0416 23:56:22.086101 2822 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:56:22.086248 kubelet[2822]: I0416 23:56:22.086226 2822 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 23:56:22.086287 kubelet[2822]: I0416 23:56:22.086244 2822 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 23:56:22.086287 kubelet[2822]: I0416 23:56:22.086261 2822 policy_none.go:49] "None policy: Start" Apr 16 23:56:22.086287 kubelet[2822]: I0416 23:56:22.086271 2822 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 23:56:22.086287 kubelet[2822]: I0416 23:56:22.086279 2822 state_mem.go:35] "Initializing new in-memory state store" Apr 16 23:56:22.086385 kubelet[2822]: I0416 23:56:22.086370 2822 state_mem.go:75] "Updated machine memory state" Apr 16 23:56:22.090420 kubelet[2822]: E0416 23:56:22.090382 2822 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" 
checkpoint="kubelet_internal_checkpoint" Apr 16 23:56:22.090586 kubelet[2822]: I0416 23:56:22.090569 2822 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:56:22.090626 kubelet[2822]: I0416 23:56:22.090591 2822 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:56:22.091137 kubelet[2822]: I0416 23:56:22.091109 2822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:56:22.092074 kubelet[2822]: E0416 23:56:22.091863 2822 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:56:22.151963 kubelet[2822]: I0416 23:56:22.151705 2822 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.151963 kubelet[2822]: I0416 23:56:22.151821 2822 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.151963 kubelet[2822]: I0416 23:56:22.151854 2822 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.161389 kubelet[2822]: E0416 23:56:22.161359 2822 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-fcb502653b\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.194313 kubelet[2822]: I0416 23:56:22.194284 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.202415 kubelet[2822]: I0416 23:56:22.202386 2822 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.202494 kubelet[2822]: I0416 23:56:22.202466 2822 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348449 kubelet[2822]: I0416 
23:56:22.348343 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffd4fc445f5286ea18a3631baf165ee8-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" (UID: \"ffd4fc445f5286ea18a3631baf165ee8\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348449 kubelet[2822]: I0416 23:56:22.348387 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffd4fc445f5286ea18a3631baf165ee8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" (UID: \"ffd4fc445f5286ea18a3631baf165ee8\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348449 kubelet[2822]: I0416 23:56:22.348410 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348449 kubelet[2822]: I0416 23:56:22.348428 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c2bf432fcbd16ee209e9bee4a467629-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-fcb502653b\" (UID: \"1c2bf432fcbd16ee209e9bee4a467629\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348609 kubelet[2822]: I0416 23:56:22.348457 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffd4fc445f5286ea18a3631baf165ee8-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" (UID: 
\"ffd4fc445f5286ea18a3631baf165ee8\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348609 kubelet[2822]: I0416 23:56:22.348476 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348609 kubelet[2822]: I0416 23:56:22.348492 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348609 kubelet[2822]: I0416 23:56:22.348506 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.348609 kubelet[2822]: I0416 23:56:22.348522 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cea8d98cb830329bc4960e42abf1314c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-fcb502653b\" (UID: \"cea8d98cb830329bc4960e42abf1314c\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:22.518682 update_engine[1607]: I20260416 23:56:22.518555 1607 update_attempter.cc:509] Updating boot flags... 
Apr 16 23:56:23.023713 kubelet[2822]: I0416 23:56:23.023060 2822 apiserver.go:52] "Watching apiserver" Apr 16 23:56:23.046777 kubelet[2822]: I0416 23:56:23.046742 2822 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 23:56:23.073563 kubelet[2822]: I0416 23:56:23.073511 2822 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:23.081975 kubelet[2822]: E0416 23:56:23.081936 2822 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-fcb502653b\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" Apr 16 23:56:23.103564 kubelet[2822]: I0416 23:56:23.103504 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" podStartSLOduration=1.103487636 podStartE2EDuration="1.103487636s" podCreationTimestamp="2026-04-16 23:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:56:23.103085076 +0000 UTC m=+1.140019610" watchObservedRunningTime="2026-04-16 23:56:23.103487636 +0000 UTC m=+1.140422130" Apr 16 23:56:23.103750 kubelet[2822]: I0416 23:56:23.103628 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-fcb502653b" podStartSLOduration=3.103620515 podStartE2EDuration="3.103620515s" podCreationTimestamp="2026-04-16 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:56:23.091904332 +0000 UTC m=+1.128838826" watchObservedRunningTime="2026-04-16 23:56:23.103620515 +0000 UTC m=+1.140555009" Apr 16 23:56:23.122781 kubelet[2822]: I0416 23:56:23.122529 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-fcb502653b" podStartSLOduration=1.122512728 podStartE2EDuration="1.122512728s" podCreationTimestamp="2026-04-16 23:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:56:23.112785862 +0000 UTC m=+1.149720436" watchObservedRunningTime="2026-04-16 23:56:23.122512728 +0000 UTC m=+1.159447222" Apr 16 23:56:27.044090 kubelet[2822]: I0416 23:56:27.044013 2822 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 23:56:27.044785 kubelet[2822]: I0416 23:56:27.044536 2822 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 23:56:27.044869 containerd[1624]: time="2026-04-16T23:56:27.044322954Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 23:56:27.699797 systemd[1]: Created slice kubepods-besteffort-pod4dd5a0fa_dc3a_4bf7_82b1_a68e852d201a.slice - libcontainer container kubepods-besteffort-pod4dd5a0fa_dc3a_4bf7_82b1_a68e852d201a.slice. 
Apr 16 23:56:27.780333 kubelet[2822]: I0416 23:56:27.780252 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a-xtables-lock\") pod \"kube-proxy-2v7tp\" (UID: \"4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a\") " pod="kube-system/kube-proxy-2v7tp" Apr 16 23:56:27.780634 kubelet[2822]: I0416 23:56:27.780511 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a-kube-proxy\") pod \"kube-proxy-2v7tp\" (UID: \"4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a\") " pod="kube-system/kube-proxy-2v7tp" Apr 16 23:56:27.780634 kubelet[2822]: I0416 23:56:27.780538 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a-lib-modules\") pod \"kube-proxy-2v7tp\" (UID: \"4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a\") " pod="kube-system/kube-proxy-2v7tp" Apr 16 23:56:27.780634 kubelet[2822]: I0416 23:56:27.780595 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2l6n\" (UniqueName: \"kubernetes.io/projected/4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a-kube-api-access-s2l6n\") pod \"kube-proxy-2v7tp\" (UID: \"4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a\") " pod="kube-system/kube-proxy-2v7tp" Apr 16 23:56:28.010052 containerd[1624]: time="2026-04-16T23:56:28.010005771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v7tp,Uid:4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a,Namespace:kube-system,Attempt:0,}" Apr 16 23:56:28.033681 containerd[1624]: time="2026-04-16T23:56:28.033637137Z" level=info msg="connecting to shim d411303398646a32ac5fb8f8da749cc3ffdfeb94f1c2953c2f7226c73813505f" 
address="unix:///run/containerd/s/e7f7ea2c25b5cb5e865d73751673d777037c01cb4b1a599969a0540ddad493bd" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:28.055353 systemd[1]: Started cri-containerd-d411303398646a32ac5fb8f8da749cc3ffdfeb94f1c2953c2f7226c73813505f.scope - libcontainer container d411303398646a32ac5fb8f8da749cc3ffdfeb94f1c2953c2f7226c73813505f. Apr 16 23:56:28.079296 containerd[1624]: time="2026-04-16T23:56:28.079258512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v7tp,Uid:4dd5a0fa-dc3a-4bf7-82b1-a68e852d201a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d411303398646a32ac5fb8f8da749cc3ffdfeb94f1c2953c2f7226c73813505f\"" Apr 16 23:56:28.085866 containerd[1624]: time="2026-04-16T23:56:28.085834343Z" level=info msg="CreateContainer within sandbox \"d411303398646a32ac5fb8f8da749cc3ffdfeb94f1c2953c2f7226c73813505f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 23:56:28.099078 containerd[1624]: time="2026-04-16T23:56:28.098273565Z" level=info msg="Container 30c59c724709e4a0bf77e5f327a8acafa30f62ce924769f55046830d694bdb58: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:28.111454 containerd[1624]: time="2026-04-16T23:56:28.111404786Z" level=info msg="CreateContainer within sandbox \"d411303398646a32ac5fb8f8da749cc3ffdfeb94f1c2953c2f7226c73813505f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"30c59c724709e4a0bf77e5f327a8acafa30f62ce924769f55046830d694bdb58\"" Apr 16 23:56:28.112136 containerd[1624]: time="2026-04-16T23:56:28.112111145Z" level=info msg="StartContainer for \"30c59c724709e4a0bf77e5f327a8acafa30f62ce924769f55046830d694bdb58\"" Apr 16 23:56:28.113457 containerd[1624]: time="2026-04-16T23:56:28.113428543Z" level=info msg="connecting to shim 30c59c724709e4a0bf77e5f327a8acafa30f62ce924769f55046830d694bdb58" address="unix:///run/containerd/s/e7f7ea2c25b5cb5e865d73751673d777037c01cb4b1a599969a0540ddad493bd" protocol=ttrpc version=3 Apr 16 
23:56:28.134217 systemd[1]: Started cri-containerd-30c59c724709e4a0bf77e5f327a8acafa30f62ce924769f55046830d694bdb58.scope - libcontainer container 30c59c724709e4a0bf77e5f327a8acafa30f62ce924769f55046830d694bdb58. Apr 16 23:56:28.204895 containerd[1624]: time="2026-04-16T23:56:28.204855772Z" level=info msg="StartContainer for \"30c59c724709e4a0bf77e5f327a8acafa30f62ce924769f55046830d694bdb58\" returns successfully" Apr 16 23:56:28.254433 systemd[1]: Created slice kubepods-besteffort-pod9463c683_bead_4e7e_b2e3_777720a3bf2c.slice - libcontainer container kubepods-besteffort-pod9463c683_bead_4e7e_b2e3_777720a3bf2c.slice. Apr 16 23:56:28.283180 kubelet[2822]: I0416 23:56:28.282962 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjb28\" (UniqueName: \"kubernetes.io/projected/9463c683-bead-4e7e-b2e3-777720a3bf2c-kube-api-access-zjb28\") pod \"tigera-operator-6bf85f8dd-mjzz5\" (UID: \"9463c683-bead-4e7e-b2e3-777720a3bf2c\") " pod="tigera-operator/tigera-operator-6bf85f8dd-mjzz5" Apr 16 23:56:28.283180 kubelet[2822]: I0416 23:56:28.283004 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9463c683-bead-4e7e-b2e3-777720a3bf2c-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-mjzz5\" (UID: \"9463c683-bead-4e7e-b2e3-777720a3bf2c\") " pod="tigera-operator/tigera-operator-6bf85f8dd-mjzz5" Apr 16 23:56:28.558061 containerd[1624]: time="2026-04-16T23:56:28.557902067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-mjzz5,Uid:9463c683-bead-4e7e-b2e3-777720a3bf2c,Namespace:tigera-operator,Attempt:0,}" Apr 16 23:56:28.584832 containerd[1624]: time="2026-04-16T23:56:28.584732748Z" level=info msg="connecting to shim 5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40" 
address="unix:///run/containerd/s/d3c8637f82b5443925e11261e741844f29d2ed9c791bd6d0e29bd9f8d1d734de" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:28.605210 systemd[1]: Started cri-containerd-5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40.scope - libcontainer container 5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40. Apr 16 23:56:28.636014 containerd[1624]: time="2026-04-16T23:56:28.635975475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-mjzz5,Uid:9463c683-bead-4e7e-b2e3-777720a3bf2c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40\"" Apr 16 23:56:28.637568 containerd[1624]: time="2026-04-16T23:56:28.637468153Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 23:56:29.099268 kubelet[2822]: I0416 23:56:29.098975 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2v7tp" podStartSLOduration=2.098960652 podStartE2EDuration="2.098960652s" podCreationTimestamp="2026-04-16 23:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:56:29.098200053 +0000 UTC m=+7.135134547" watchObservedRunningTime="2026-04-16 23:56:29.098960652 +0000 UTC m=+7.135895146" Apr 16 23:56:30.313572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976672255.mount: Deactivated successfully. 
Apr 16 23:56:30.811475 containerd[1624]: time="2026-04-16T23:56:30.811404759Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:30.814085 containerd[1624]: time="2026-04-16T23:56:30.814033435Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 16 23:56:30.815388 containerd[1624]: time="2026-04-16T23:56:30.815355033Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:30.818630 containerd[1624]: time="2026-04-16T23:56:30.818587869Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:30.819437 containerd[1624]: time="2026-04-16T23:56:30.819403148Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.181854836s" Apr 16 23:56:30.819472 containerd[1624]: time="2026-04-16T23:56:30.819434948Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 16 23:56:30.824824 containerd[1624]: time="2026-04-16T23:56:30.824793860Z" level=info msg="CreateContainer within sandbox \"5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 23:56:30.838341 containerd[1624]: time="2026-04-16T23:56:30.838265521Z" level=info msg="Container 
9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:30.847206 containerd[1624]: time="2026-04-16T23:56:30.847083348Z" level=info msg="CreateContainer within sandbox \"5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7\"" Apr 16 23:56:30.847515 containerd[1624]: time="2026-04-16T23:56:30.847484067Z" level=info msg="StartContainer for \"9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7\"" Apr 16 23:56:30.848659 containerd[1624]: time="2026-04-16T23:56:30.848442986Z" level=info msg="connecting to shim 9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7" address="unix:///run/containerd/s/d3c8637f82b5443925e11261e741844f29d2ed9c791bd6d0e29bd9f8d1d734de" protocol=ttrpc version=3 Apr 16 23:56:30.871183 systemd[1]: Started cri-containerd-9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7.scope - libcontainer container 9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7. Apr 16 23:56:30.895871 containerd[1624]: time="2026-04-16T23:56:30.895797638Z" level=info msg="StartContainer for \"9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7\" returns successfully" Apr 16 23:56:36.067438 sudo[1864]: pam_unix(sudo:session): session closed for user root Apr 16 23:56:36.082344 sshd[1863]: Connection closed by 50.85.169.122 port 39408 Apr 16 23:56:36.082247 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Apr 16 23:56:36.085320 systemd[1]: sshd@6-10.0.0.99:22-50.85.169.122:39408.service: Deactivated successfully. Apr 16 23:56:36.088120 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 23:56:36.090351 systemd[1]: session-7.scope: Consumed 7.242s CPU time, 222.6M memory peak. Apr 16 23:56:36.092688 systemd-logind[1603]: Session 7 logged out. 
Waiting for processes to exit. Apr 16 23:56:36.094572 systemd-logind[1603]: Removed session 7. Apr 16 23:56:36.514896 kubelet[2822]: I0416 23:56:36.514834 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-mjzz5" podStartSLOduration=6.331767398 podStartE2EDuration="8.514819912s" podCreationTimestamp="2026-04-16 23:56:28 +0000 UTC" firstStartedPulling="2026-04-16 23:56:28.637153313 +0000 UTC m=+6.674087767" lastFinishedPulling="2026-04-16 23:56:30.820205787 +0000 UTC m=+8.857140281" observedRunningTime="2026-04-16 23:56:31.105006619 +0000 UTC m=+9.141941113" watchObservedRunningTime="2026-04-16 23:56:36.514819912 +0000 UTC m=+14.551754406" Apr 16 23:56:42.779450 systemd[1]: Created slice kubepods-besteffort-podfbbc0a03_4f38_469a_93f8_2ba963eabc9e.slice - libcontainer container kubepods-besteffort-podfbbc0a03_4f38_469a_93f8_2ba963eabc9e.slice. Apr 16 23:56:42.828698 systemd[1]: Created slice kubepods-besteffort-podd3ad6e05_7a16_46e1_be02_5891fd673b34.slice - libcontainer container kubepods-besteffort-podd3ad6e05_7a16_46e1_be02_5891fd673b34.slice. 
Apr 16 23:56:42.865726 kubelet[2822]: I0416 23:56:42.865691 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fbbc0a03-4f38-469a-93f8-2ba963eabc9e-typha-certs\") pod \"calico-typha-6fc7945d-5rp8b\" (UID: \"fbbc0a03-4f38-469a-93f8-2ba963eabc9e\") " pod="calico-system/calico-typha-6fc7945d-5rp8b" Apr 16 23:56:42.865726 kubelet[2822]: I0416 23:56:42.865729 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2wtd\" (UniqueName: \"kubernetes.io/projected/fbbc0a03-4f38-469a-93f8-2ba963eabc9e-kube-api-access-f2wtd\") pod \"calico-typha-6fc7945d-5rp8b\" (UID: \"fbbc0a03-4f38-469a-93f8-2ba963eabc9e\") " pod="calico-system/calico-typha-6fc7945d-5rp8b" Apr 16 23:56:42.866121 kubelet[2822]: I0416 23:56:42.865747 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-cni-bin-dir\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866121 kubelet[2822]: I0416 23:56:42.865762 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-cni-log-dir\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866121 kubelet[2822]: I0416 23:56:42.865891 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-sys-fs\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866121 kubelet[2822]: I0416 
23:56:42.865966 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-lib-modules\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866211 kubelet[2822]: I0416 23:56:42.866023 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-nodeproc\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866233 kubelet[2822]: I0416 23:56:42.866207 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-policysync\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866255 kubelet[2822]: I0416 23:56:42.866237 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qs6x\" (UniqueName: \"kubernetes.io/projected/d3ad6e05-7a16-46e1-be02-5891fd673b34-kube-api-access-2qs6x\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866276 kubelet[2822]: I0416 23:56:42.866258 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-bpffs\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866297 kubelet[2822]: I0416 23:56:42.866274 2822 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-flexvol-driver-host\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866297 kubelet[2822]: I0416 23:56:42.866289 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d3ad6e05-7a16-46e1-be02-5891fd673b34-node-certs\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866338 kubelet[2822]: I0416 23:56:42.866318 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-var-lib-calico\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866359 kubelet[2822]: I0416 23:56:42.866344 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbbc0a03-4f38-469a-93f8-2ba963eabc9e-tigera-ca-bundle\") pod \"calico-typha-6fc7945d-5rp8b\" (UID: \"fbbc0a03-4f38-469a-93f8-2ba963eabc9e\") " pod="calico-system/calico-typha-6fc7945d-5rp8b" Apr 16 23:56:42.866380 kubelet[2822]: I0416 23:56:42.866366 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-xtables-lock\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866401 kubelet[2822]: I0416 23:56:42.866382 2822 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-cni-net-dir\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866401 kubelet[2822]: I0416 23:56:42.866396 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ad6e05-7a16-46e1-be02-5891fd673b34-tigera-ca-bundle\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.866443 kubelet[2822]: I0416 23:56:42.866411 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d3ad6e05-7a16-46e1-be02-5891fd673b34-var-run-calico\") pod \"calico-node-v4dpx\" (UID: \"d3ad6e05-7a16-46e1-be02-5891fd673b34\") " pod="calico-system/calico-node-v4dpx" Apr 16 23:56:42.933106 kubelet[2822]: E0416 23:56:42.932561 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07" Apr 16 23:56:42.967554 kubelet[2822]: I0416 23:56:42.967515 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsqdf\" (UniqueName: \"kubernetes.io/projected/83c6c081-1afd-4376-901d-205c3cf0fa07-kube-api-access-hsqdf\") pod \"csi-node-driver-cnqx8\" (UID: \"83c6c081-1afd-4376-901d-205c3cf0fa07\") " pod="calico-system/csi-node-driver-cnqx8" Apr 16 23:56:42.967692 kubelet[2822]: I0416 23:56:42.967630 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83c6c081-1afd-4376-901d-205c3cf0fa07-kubelet-dir\") pod \"csi-node-driver-cnqx8\" (UID: \"83c6c081-1afd-4376-901d-205c3cf0fa07\") " pod="calico-system/csi-node-driver-cnqx8" Apr 16 23:56:42.967692 kubelet[2822]: I0416 23:56:42.967649 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/83c6c081-1afd-4376-901d-205c3cf0fa07-varrun\") pod \"csi-node-driver-cnqx8\" (UID: \"83c6c081-1afd-4376-901d-205c3cf0fa07\") " pod="calico-system/csi-node-driver-cnqx8" Apr 16 23:56:42.967767 kubelet[2822]: I0416 23:56:42.967703 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/83c6c081-1afd-4376-901d-205c3cf0fa07-registration-dir\") pod \"csi-node-driver-cnqx8\" (UID: \"83c6c081-1afd-4376-901d-205c3cf0fa07\") " pod="calico-system/csi-node-driver-cnqx8" Apr 16 23:56:42.967767 kubelet[2822]: I0416 23:56:42.967720 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/83c6c081-1afd-4376-901d-205c3cf0fa07-socket-dir\") pod \"csi-node-driver-cnqx8\" (UID: \"83c6c081-1afd-4376-901d-205c3cf0fa07\") " pod="calico-system/csi-node-driver-cnqx8" Apr 16 23:56:42.969871 kubelet[2822]: E0416 23:56:42.969289 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:42.969871 kubelet[2822]: W0416 23:56:42.969668 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:42.969871 kubelet[2822]: E0416 23:56:42.969700 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:42.970239 kubelet[2822]: E0416 23:56:42.970215 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:42.970469 kubelet[2822]: W0416 23:56:42.970437 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:42.970469 kubelet[2822]: E0416 23:56:42.970467 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:42.973214 kubelet[2822]: E0416 23:56:42.973190 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:42.973297 kubelet[2822]: W0416 23:56:42.973283 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:42.973364 kubelet[2822]: E0416 23:56:42.973353 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:42.978927 kubelet[2822]: E0416 23:56:42.978887 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:42.978927 kubelet[2822]: W0416 23:56:42.978907 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:42.978927 kubelet[2822]: E0416 23:56:42.978921 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:42.981761 kubelet[2822]: E0416 23:56:42.981742 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:42.981855 kubelet[2822]: W0416 23:56:42.981841 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:42.981911 kubelet[2822]: E0416 23:56:42.981901 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:42.982167 kubelet[2822]: E0416 23:56:42.982154 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:42.982232 kubelet[2822]: W0416 23:56:42.982221 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:42.982293 kubelet[2822]: E0416 23:56:42.982283 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:42.990321 kubelet[2822]: E0416 23:56:42.990302 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:42.990521 kubelet[2822]: W0416 23:56:42.990484 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:42.990636 kubelet[2822]: E0416 23:56:42.990583 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 23:56:43.069676 kubelet[2822]: E0416 23:56:43.069199 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:56:43.069676 kubelet[2822]: W0416 23:56:43.069411 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:56:43.069676 kubelet[2822]: E0416 23:56:43.069434 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:56:43.084267 containerd[1624]: time="2026-04-16T23:56:43.084230427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fc7945d-5rp8b,Uid:fbbc0a03-4f38-469a-93f8-2ba963eabc9e,Namespace:calico-system,Attempt:0,}"
Apr 16 23:56:43.112002 containerd[1624]: time="2026-04-16T23:56:43.111948907Z" level=info msg="connecting to shim 1511b02c29be839eee84342d51f30941788e4b2b968f83248bb6ec5cd3beeff5" address="unix:///run/containerd/s/813e17ca5bfd061a2df6fa2ee1b4fc897b39d9797d762acc41419640a3bf16c8" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:56:43.132590 containerd[1624]: time="2026-04-16T23:56:43.132187358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v4dpx,Uid:d3ad6e05-7a16-46e1-be02-5891fd673b34,Namespace:calico-system,Attempt:0,}"
Apr 16 23:56:43.134190 systemd[1]: Started cri-containerd-1511b02c29be839eee84342d51f30941788e4b2b968f83248bb6ec5cd3beeff5.scope - libcontainer container 1511b02c29be839eee84342d51f30941788e4b2b968f83248bb6ec5cd3beeff5.
Apr 16 23:56:43.161230 containerd[1624]: time="2026-04-16T23:56:43.161142317Z" level=info msg="connecting to shim 708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460" address="unix:///run/containerd/s/6f58918a0ef3275aa72269afd8258ddfa883d30c020b070a378c2285423103d1" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:56:43.170820 containerd[1624]: time="2026-04-16T23:56:43.170779903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fc7945d-5rp8b,Uid:fbbc0a03-4f38-469a-93f8-2ba963eabc9e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1511b02c29be839eee84342d51f30941788e4b2b968f83248bb6ec5cd3beeff5\""
Apr 16 23:56:43.175357 containerd[1624]: time="2026-04-16T23:56:43.174952777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 16 23:56:43.183203 systemd[1]: Started cri-containerd-708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460.scope - libcontainer container 708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460.
Apr 16 23:56:43.206568 containerd[1624]: time="2026-04-16T23:56:43.206528652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v4dpx,Uid:d3ad6e05-7a16-46e1-be02-5891fd673b34,Namespace:calico-system,Attempt:0,} returns sandbox id \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\""
Apr 16 23:56:44.720839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552052388.mount: Deactivated successfully.
Apr 16 23:56:45.050974 kubelet[2822]: E0416 23:56:45.050912 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:45.362724 containerd[1624]: time="2026-04-16T23:56:45.362580205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:45.364106 containerd[1624]: time="2026-04-16T23:56:45.364061723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Apr 16 23:56:45.365868 containerd[1624]: time="2026-04-16T23:56:45.365819320Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:45.368974 containerd[1624]: time="2026-04-16T23:56:45.368920196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:45.369810 containerd[1624]: time="2026-04-16T23:56:45.369771834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.194784457s"
Apr 16 23:56:45.369810 containerd[1624]: time="2026-04-16T23:56:45.369805234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Apr 16 23:56:45.370998 containerd[1624]: time="2026-04-16T23:56:45.370970873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 16 23:56:45.380729 containerd[1624]: time="2026-04-16T23:56:45.380685899Z" level=info msg="CreateContainer within sandbox \"1511b02c29be839eee84342d51f30941788e4b2b968f83248bb6ec5cd3beeff5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 16 23:56:45.393109 containerd[1624]: time="2026-04-16T23:56:45.393070161Z" level=info msg="Container 778d13ddf114e95540e7558b3e639996a0f974275d29c468c4c81d5aa26b7cf3: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:56:45.408500 containerd[1624]: time="2026-04-16T23:56:45.408381419Z" level=info msg="CreateContainer within sandbox \"1511b02c29be839eee84342d51f30941788e4b2b968f83248bb6ec5cd3beeff5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"778d13ddf114e95540e7558b3e639996a0f974275d29c468c4c81d5aa26b7cf3\""
Apr 16 23:56:45.408868 containerd[1624]: time="2026-04-16T23:56:45.408845298Z" level=info msg="StartContainer for \"778d13ddf114e95540e7558b3e639996a0f974275d29c468c4c81d5aa26b7cf3\""
Apr 16 23:56:45.410002 containerd[1624]: time="2026-04-16T23:56:45.409977617Z" level=info msg="connecting to shim 778d13ddf114e95540e7558b3e639996a0f974275d29c468c4c81d5aa26b7cf3" address="unix:///run/containerd/s/813e17ca5bfd061a2df6fa2ee1b4fc897b39d9797d762acc41419640a3bf16c8" protocol=ttrpc version=3
Apr 16 23:56:45.439204 systemd[1]: Started cri-containerd-778d13ddf114e95540e7558b3e639996a0f974275d29c468c4c81d5aa26b7cf3.scope - libcontainer container 778d13ddf114e95540e7558b3e639996a0f974275d29c468c4c81d5aa26b7cf3.
Apr 16 23:56:45.482524 containerd[1624]: time="2026-04-16T23:56:45.482490433Z" level=info msg="StartContainer for \"778d13ddf114e95540e7558b3e639996a0f974275d29c468c4c81d5aa26b7cf3\" returns successfully"
Apr 16 23:56:46.140563 kubelet[2822]: I0416 23:56:46.140213 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fc7945d-5rp8b" podStartSLOduration=1.943886316 podStartE2EDuration="4.140195531s" podCreationTimestamp="2026-04-16 23:56:42 +0000 UTC" firstStartedPulling="2026-04-16 23:56:43.174324578 +0000 UTC m=+21.211259072" lastFinishedPulling="2026-04-16 23:56:45.370633793 +0000 UTC m=+23.407568287" observedRunningTime="2026-04-16 23:56:46.138983213 +0000 UTC m=+24.175917707" watchObservedRunningTime="2026-04-16 23:56:46.140195531 +0000 UTC m=+24.177130065"
Apr 16 23:56:46.180487 kubelet[2822]: E0416 23:56:46.180450 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:56:46.180487 kubelet[2822]: W0416 23:56:46.180481 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:56:46.180628 kubelet[2822]: E0416 23:56:46.180502 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:56:46.195471 kubelet[2822]: E0416 23:56:46.195452 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:56:46.195471 kubelet[2822]: W0416 23:56:46.195470 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:56:46.195526 kubelet[2822]: E0416 23:56:46.195483 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.195644 kubelet[2822]: E0416 23:56:46.195632 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.195644 kubelet[2822]: W0416 23:56:46.195643 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.195752 kubelet[2822]: E0416 23:56:46.195651 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:46.195814 kubelet[2822]: E0416 23:56:46.195800 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.195814 kubelet[2822]: W0416 23:56:46.195810 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.195858 kubelet[2822]: E0416 23:56:46.195818 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.195999 kubelet[2822]: E0416 23:56:46.195986 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.195999 kubelet[2822]: W0416 23:56:46.195996 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.196071 kubelet[2822]: E0416 23:56:46.196004 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:46.196306 kubelet[2822]: E0416 23:56:46.196266 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.196306 kubelet[2822]: W0416 23:56:46.196291 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.196306 kubelet[2822]: E0416 23:56:46.196304 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.196515 kubelet[2822]: E0416 23:56:46.196502 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.196542 kubelet[2822]: W0416 23:56:46.196514 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.196542 kubelet[2822]: E0416 23:56:46.196524 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:46.196680 kubelet[2822]: E0416 23:56:46.196668 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.196702 kubelet[2822]: W0416 23:56:46.196679 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.196702 kubelet[2822]: E0416 23:56:46.196688 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.196826 kubelet[2822]: E0416 23:56:46.196816 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.196851 kubelet[2822]: W0416 23:56:46.196826 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.196851 kubelet[2822]: E0416 23:56:46.196836 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:46.197018 kubelet[2822]: E0416 23:56:46.197005 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.197018 kubelet[2822]: W0416 23:56:46.197016 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.197084 kubelet[2822]: E0416 23:56:46.197024 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.197193 kubelet[2822]: E0416 23:56:46.197180 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.197193 kubelet[2822]: W0416 23:56:46.197190 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.197242 kubelet[2822]: E0416 23:56:46.197199 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:46.197452 kubelet[2822]: E0416 23:56:46.197419 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.197452 kubelet[2822]: W0416 23:56:46.197436 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.197452 kubelet[2822]: E0416 23:56:46.197448 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.197583 kubelet[2822]: E0416 23:56:46.197572 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.197583 kubelet[2822]: W0416 23:56:46.197582 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.197631 kubelet[2822]: E0416 23:56:46.197589 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:46.197754 kubelet[2822]: E0416 23:56:46.197743 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.197779 kubelet[2822]: W0416 23:56:46.197754 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.197779 kubelet[2822]: E0416 23:56:46.197761 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.198018 kubelet[2822]: E0416 23:56:46.198003 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.198048 kubelet[2822]: W0416 23:56:46.198018 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.198048 kubelet[2822]: E0416 23:56:46.198029 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:46.198223 kubelet[2822]: E0416 23:56:46.198208 2822 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:46.198223 kubelet[2822]: W0416 23:56:46.198221 2822 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:46.198265 kubelet[2822]: E0416 23:56:46.198229 2822 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:46.933364 containerd[1624]: time="2026-04-16T23:56:46.933286436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:46.934821 containerd[1624]: time="2026-04-16T23:56:46.934776314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 16 23:56:46.936373 containerd[1624]: time="2026-04-16T23:56:46.936307872Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:46.940081 containerd[1624]: time="2026-04-16T23:56:46.940005866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:46.941070 containerd[1624]: time="2026-04-16T23:56:46.940747425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.569702512s" Apr 16 23:56:46.941070 containerd[1624]: time="2026-04-16T23:56:46.940781305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 16 23:56:46.946733 containerd[1624]: time="2026-04-16T23:56:46.946698457Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 23:56:46.957090 containerd[1624]: time="2026-04-16T23:56:46.956121123Z" level=info msg="Container 0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:46.971609 containerd[1624]: time="2026-04-16T23:56:46.971533261Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01\"" Apr 16 23:56:46.972313 containerd[1624]: time="2026-04-16T23:56:46.972287700Z" level=info msg="StartContainer for \"0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01\"" Apr 16 23:56:46.973775 containerd[1624]: time="2026-04-16T23:56:46.973750338Z" level=info msg="connecting to shim 0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01" address="unix:///run/containerd/s/6f58918a0ef3275aa72269afd8258ddfa883d30c020b070a378c2285423103d1" protocol=ttrpc version=3 Apr 16 23:56:46.992315 systemd[1]: Started cri-containerd-0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01.scope - libcontainer container 0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01. 
Apr 16 23:56:47.047369 containerd[1624]: time="2026-04-16T23:56:47.047263153Z" level=info msg="StartContainer for \"0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01\" returns successfully"
Apr 16 23:56:47.050918 kubelet[2822]: E0416 23:56:47.050854 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:47.058475 systemd[1]: cri-containerd-0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01.scope: Deactivated successfully.
Apr 16 23:56:47.063348 containerd[1624]: time="2026-04-16T23:56:47.063297330Z" level=info msg="received container exit event container_id:\"0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01\" id:\"0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01\" pid:3488 exited_at:{seconds:1776383807 nanos:62802530}"
Apr 16 23:56:47.082835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0407e11eb2f99254ddc14be5beb69676ca0ed37b6f54348863e899a873ff7f01-rootfs.mount: Deactivated successfully.
Apr 16 23:56:47.134270 kubelet[2822]: I0416 23:56:47.134214 2822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:56:49.050920 kubelet[2822]: E0416 23:56:49.050858 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:51.051153 kubelet[2822]: E0416 23:56:51.051009 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:51.143480 containerd[1624]: time="2026-04-16T23:56:51.143353888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 16 23:56:53.051857 kubelet[2822]: E0416 23:56:53.051781 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:55.051036 kubelet[2822]: E0416 23:56:55.050961 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:57.051067 kubelet[2822]: E0416 23:56:57.050981 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:58.232746 kubelet[2822]: I0416 23:56:58.231650 2822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:56:59.050805 kubelet[2822]: E0416 23:56:59.050752 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:56:59.796550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987249075.mount: Deactivated successfully.
Apr 16 23:56:59.832998 containerd[1624]: time="2026-04-16T23:56:59.832628248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:59.834793 containerd[1624]: time="2026-04-16T23:56:59.834765365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Apr 16 23:56:59.837190 containerd[1624]: time="2026-04-16T23:56:59.837130282Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:59.840577 containerd[1624]: time="2026-04-16T23:56:59.840545437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:59.841305 containerd[1624]: time="2026-04-16T23:56:59.841170276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 8.697690228s"
Apr 16 23:56:59.841305 containerd[1624]: time="2026-04-16T23:56:59.841196076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Apr 16 23:56:59.846407 containerd[1624]: time="2026-04-16T23:56:59.846377628Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 16 23:56:59.863504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245918427.mount: Deactivated successfully.
Apr 16 23:56:59.864181 containerd[1624]: time="2026-04-16T23:56:59.863522324Z" level=info msg="Container 06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:56:59.879097 containerd[1624]: time="2026-04-16T23:56:59.879059262Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8\""
Apr 16 23:56:59.879881 containerd[1624]: time="2026-04-16T23:56:59.879834381Z" level=info msg="StartContainer for \"06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8\""
Apr 16 23:56:59.881657 containerd[1624]: time="2026-04-16T23:56:59.881627178Z" level=info msg="connecting to shim 06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8" address="unix:///run/containerd/s/6f58918a0ef3275aa72269afd8258ddfa883d30c020b070a378c2285423103d1" protocol=ttrpc version=3
Apr 16 23:56:59.901329 systemd[1]: Started cri-containerd-06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8.scope - libcontainer container 06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8.
Apr 16 23:56:59.961888 containerd[1624]: time="2026-04-16T23:56:59.961788023Z" level=info msg="StartContainer for \"06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8\" returns successfully"
Apr 16 23:57:00.056359 systemd[1]: cri-containerd-06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8.scope: Deactivated successfully.
Apr 16 23:57:00.059208 containerd[1624]: time="2026-04-16T23:57:00.059071044Z" level=info msg="received container exit event container_id:\"06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8\" id:\"06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8\" pid:3550 exited_at:{seconds:1776383820 nanos:58884804}"
Apr 16 23:57:00.796539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06a9b40707003d30411375a35aa8a90c9ddc88d402502dc72ab87067fe7d9bc8-rootfs.mount: Deactivated successfully.
Apr 16 23:57:01.051301 kubelet[2822]: E0416 23:57:01.051123 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:57:03.051248 kubelet[2822]: E0416 23:57:03.051111 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:57:04.174807 containerd[1624]: time="2026-04-16T23:57:04.174765792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 16 23:57:05.051312 kubelet[2822]: E0416 23:57:05.051252 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:57:07.051752 kubelet[2822]: E0416 23:57:07.051653 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:57:07.442265 containerd[1624]: time="2026-04-16T23:57:07.442151634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:07.444213 containerd[1624]: time="2026-04-16T23:57:07.444172791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Apr 16 23:57:07.446747 containerd[1624]: time="2026-04-16T23:57:07.446683228Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:07.449618 containerd[1624]: time="2026-04-16T23:57:07.449496504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:07.450238 containerd[1624]: time="2026-04-16T23:57:07.450193503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.274841952s"
Apr 16 23:57:07.450238 containerd[1624]: time="2026-04-16T23:57:07.450234262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Apr 16 23:57:07.455672 containerd[1624]: time="2026-04-16T23:57:07.455624935Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 16 23:57:07.468070 containerd[1624]: time="2026-04-16T23:57:07.467032198Z" level=info msg="Container cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:57:07.479399 containerd[1624]: time="2026-04-16T23:57:07.479355261Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4\""
Apr 16 23:57:07.480473 containerd[1624]: time="2026-04-16T23:57:07.480450219Z" level=info msg="StartContainer for \"cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4\""
Apr 16 23:57:07.482412 containerd[1624]: time="2026-04-16T23:57:07.482285377Z" level=info msg="connecting to shim cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4" address="unix:///run/containerd/s/6f58918a0ef3275aa72269afd8258ddfa883d30c020b070a378c2285423103d1" protocol=ttrpc version=3
Apr 16 23:57:07.504203 systemd[1]: Started cri-containerd-cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4.scope - libcontainer container cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4.
Apr 16 23:57:07.569413 containerd[1624]: time="2026-04-16T23:57:07.569343492Z" level=info msg="StartContainer for \"cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4\" returns successfully"
Apr 16 23:57:08.880973 containerd[1624]: time="2026-04-16T23:57:08.880875654Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 23:57:08.883468 kubelet[2822]: I0416 23:57:08.883416 2822 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 16 23:57:08.883601 systemd[1]: cri-containerd-cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4.scope: Deactivated successfully.
Apr 16 23:57:08.883880 systemd[1]: cri-containerd-cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4.scope: Consumed 493ms CPU time, 196.1M memory peak, 171.3M written to disk.
Apr 16 23:57:08.886520 containerd[1624]: time="2026-04-16T23:57:08.886486366Z" level=info msg="received container exit event container_id:\"cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4\" id:\"cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4\" pid:3609 exited_at:{seconds:1776383828 nanos:886243927}"
Apr 16 23:57:08.913889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfd732f127362bdad19dd8df65947d13a3b559a63dc5b136899daed84c163ba4-rootfs.mount: Deactivated successfully.
Apr 16 23:57:09.757571 kubelet[2822]: I0416 23:57:09.757469 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6tn\" (UniqueName: \"kubernetes.io/projected/9f322798-d216-4e6e-a073-43fa8544e291-kube-api-access-4g6tn\") pod \"calico-apiserver-7f4f89866d-h5fkl\" (UID: \"9f322798-d216-4e6e-a073-43fa8544e291\") " pod="calico-system/calico-apiserver-7f4f89866d-h5fkl"
Apr 16 23:57:09.757571 kubelet[2822]: I0416 23:57:09.757522 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f322798-d216-4e6e-a073-43fa8544e291-calico-apiserver-certs\") pod \"calico-apiserver-7f4f89866d-h5fkl\" (UID: \"9f322798-d216-4e6e-a073-43fa8544e291\") " pod="calico-system/calico-apiserver-7f4f89866d-h5fkl"
Apr 16 23:57:09.858712 kubelet[2822]: E0416 23:57:09.858648 2822 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: object "calico-system"/"calico-apiserver-certs" not registered
Apr 16 23:57:09.859021 kubelet[2822]: E0416 23:57:09.858887 2822 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f322798-d216-4e6e-a073-43fa8544e291-calico-apiserver-certs podName:9f322798-d216-4e6e-a073-43fa8544e291 nodeName:}" failed. No retries permitted until 2026-04-16 23:57:10.358862334 +0000 UTC m=+48.395796828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9f322798-d216-4e6e-a073-43fa8544e291-calico-apiserver-certs") pod "calico-apiserver-7f4f89866d-h5fkl" (UID: "9f322798-d216-4e6e-a073-43fa8544e291") : object "calico-system"/"calico-apiserver-certs" not registered
Apr 16 23:57:10.079319 systemd[1]: Created slice kubepods-burstable-podec22351f_e1f7_41e4_a9b6_aff2fc21e395.slice - libcontainer container kubepods-burstable-podec22351f_e1f7_41e4_a9b6_aff2fc21e395.slice.
Apr 16 23:57:10.087476 systemd[1]: Created slice kubepods-besteffort-pod9f322798_d216_4e6e_a073_43fa8544e291.slice - libcontainer container kubepods-besteffort-pod9f322798_d216_4e6e_a073_43fa8544e291.slice.
Apr 16 23:57:10.160902 kubelet[2822]: I0416 23:57:10.160800 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndrp2\" (UniqueName: \"kubernetes.io/projected/ec22351f-e1f7-41e4-a9b6-aff2fc21e395-kube-api-access-ndrp2\") pod \"coredns-674b8bbfcf-z7m5k\" (UID: \"ec22351f-e1f7-41e4-a9b6-aff2fc21e395\") " pod="kube-system/coredns-674b8bbfcf-z7m5k"
Apr 16 23:57:10.161549 kubelet[2822]: I0416 23:57:10.160940 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec22351f-e1f7-41e4-a9b6-aff2fc21e395-config-volume\") pod \"coredns-674b8bbfcf-z7m5k\" (UID: \"ec22351f-e1f7-41e4-a9b6-aff2fc21e395\") " pod="kube-system/coredns-674b8bbfcf-z7m5k"
Apr 16 23:57:10.279525 systemd[1]: Created slice kubepods-besteffort-poddb4fc8b7_8052_4c81_8eb8_55979d26b8c5.slice - libcontainer container kubepods-besteffort-poddb4fc8b7_8052_4c81_8eb8_55979d26b8c5.slice.
Apr 16 23:57:10.361992 kubelet[2822]: I0416 23:57:10.361888 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdc5g\" (UniqueName: \"kubernetes.io/projected/db4fc8b7-8052-4c81-8eb8-55979d26b8c5-kube-api-access-pdc5g\") pod \"calico-kube-controllers-7dc5f88bc4-fqkzx\" (UID: \"db4fc8b7-8052-4c81-8eb8-55979d26b8c5\") " pod="calico-system/calico-kube-controllers-7dc5f88bc4-fqkzx"
Apr 16 23:57:10.361992 kubelet[2822]: I0416 23:57:10.361960 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db4fc8b7-8052-4c81-8eb8-55979d26b8c5-tigera-ca-bundle\") pod \"calico-kube-controllers-7dc5f88bc4-fqkzx\" (UID: \"db4fc8b7-8052-4c81-8eb8-55979d26b8c5\") " pod="calico-system/calico-kube-controllers-7dc5f88bc4-fqkzx"
Apr 16 23:57:10.362747 systemd[1]: Created slice kubepods-besteffort-pod83c6c081_1afd_4376_901d_205c3cf0fa07.slice - libcontainer container kubepods-besteffort-pod83c6c081_1afd_4376_901d_205c3cf0fa07.slice.
Apr 16 23:57:10.368001 containerd[1624]: time="2026-04-16T23:57:10.367149807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnqx8,Uid:83c6c081-1afd-4376-901d-205c3cf0fa07,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:10.377514 systemd[1]: Created slice kubepods-besteffort-pod04f7910c_7afc_459c_8ff2_3be7624489d1.slice - libcontainer container kubepods-besteffort-pod04f7910c_7afc_459c_8ff2_3be7624489d1.slice.
Apr 16 23:57:10.385775 systemd[1]: Created slice kubepods-besteffort-pod097e3f74_60ef_493b_90d6_68ecb1761186.slice - libcontainer container kubepods-besteffort-pod097e3f74_60ef_493b_90d6_68ecb1761186.slice.
Apr 16 23:57:10.390435 containerd[1624]: time="2026-04-16T23:57:10.389874014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7m5k,Uid:ec22351f-e1f7-41e4-a9b6-aff2fc21e395,Namespace:kube-system,Attempt:0,}"
Apr 16 23:57:10.390743 containerd[1624]: time="2026-04-16T23:57:10.390609253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-h5fkl,Uid:9f322798-d216-4e6e-a073-43fa8544e291,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:10.400275 systemd[1]: Created slice kubepods-burstable-pod49437803_415c_470c_9754_0a650aa11c56.slice - libcontainer container kubepods-burstable-pod49437803_415c_470c_9754_0a650aa11c56.slice.
Apr 16 23:57:10.412460 systemd[1]: Created slice kubepods-besteffort-pod4d7b14f0_1b63_46cd_97b4_9c93cc2c054f.slice - libcontainer container kubepods-besteffort-pod4d7b14f0_1b63_46cd_97b4_9c93cc2c054f.slice.
Apr 16 23:57:10.462706 kubelet[2822]: I0416 23:57:10.462645 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-ca-bundle\") pod \"whisker-895c4f675-g692r\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") " pod="calico-system/whisker-895c4f675-g692r"
Apr 16 23:57:10.462706 kubelet[2822]: I0416 23:57:10.462700 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcdc\" (UniqueName: \"kubernetes.io/projected/04f7910c-7afc-459c-8ff2-3be7624489d1-kube-api-access-mjcdc\") pod \"goldmane-5b85766d88-t64m4\" (UID: \"04f7910c-7afc-459c-8ff2-3be7624489d1\") " pod="calico-system/goldmane-5b85766d88-t64m4"
Apr 16 23:57:10.462861 kubelet[2822]: I0416 23:57:10.462734 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-nginx-config\") pod \"whisker-895c4f675-g692r\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") " pod="calico-system/whisker-895c4f675-g692r"
Apr 16 23:57:10.462861 kubelet[2822]: I0416 23:57:10.462751 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4zw\" (UniqueName: \"kubernetes.io/projected/097e3f74-60ef-493b-90d6-68ecb1761186-kube-api-access-vj4zw\") pod \"whisker-895c4f675-g692r\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") " pod="calico-system/whisker-895c4f675-g692r"
Apr 16 23:57:10.462861 kubelet[2822]: I0416 23:57:10.462768 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f7910c-7afc-459c-8ff2-3be7624489d1-config\") pod \"goldmane-5b85766d88-t64m4\" (UID: \"04f7910c-7afc-459c-8ff2-3be7624489d1\") " pod="calico-system/goldmane-5b85766d88-t64m4"
Apr 16 23:57:10.462861 kubelet[2822]: I0416 23:57:10.462783 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04f7910c-7afc-459c-8ff2-3be7624489d1-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-t64m4\" (UID: \"04f7910c-7afc-459c-8ff2-3be7624489d1\") " pod="calico-system/goldmane-5b85766d88-t64m4"
Apr 16 23:57:10.462861 kubelet[2822]: I0416 23:57:10.462798 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/04f7910c-7afc-459c-8ff2-3be7624489d1-goldmane-key-pair\") pod \"goldmane-5b85766d88-t64m4\" (UID: \"04f7910c-7afc-459c-8ff2-3be7624489d1\") " pod="calico-system/goldmane-5b85766d88-t64m4"
Apr 16 23:57:10.462972 kubelet[2822]: I0416 23:57:10.462820 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d7b14f0-1b63-46cd-97b4-9c93cc2c054f-calico-apiserver-certs\") pod \"calico-apiserver-7f4f89866d-dtrtx\" (UID: \"4d7b14f0-1b63-46cd-97b4-9c93cc2c054f\") " pod="calico-system/calico-apiserver-7f4f89866d-dtrtx"
Apr 16 23:57:10.462972 kubelet[2822]: I0416 23:57:10.462836 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd225\" (UniqueName: \"kubernetes.io/projected/4d7b14f0-1b63-46cd-97b4-9c93cc2c054f-kube-api-access-dd225\") pod \"calico-apiserver-7f4f89866d-dtrtx\" (UID: \"4d7b14f0-1b63-46cd-97b4-9c93cc2c054f\") " pod="calico-system/calico-apiserver-7f4f89866d-dtrtx"
Apr 16 23:57:10.462972 kubelet[2822]: I0416 23:57:10.462854 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-backend-key-pair\") pod \"whisker-895c4f675-g692r\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") " pod="calico-system/whisker-895c4f675-g692r"
Apr 16 23:57:10.462972 kubelet[2822]: I0416 23:57:10.462889 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49437803-415c-470c-9754-0a650aa11c56-config-volume\") pod \"coredns-674b8bbfcf-h2ts7\" (UID: \"49437803-415c-470c-9754-0a650aa11c56\") " pod="kube-system/coredns-674b8bbfcf-h2ts7"
Apr 16 23:57:10.462972 kubelet[2822]: I0416 23:57:10.462904 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nthj2\" (UniqueName: \"kubernetes.io/projected/49437803-415c-470c-9754-0a650aa11c56-kube-api-access-nthj2\") pod \"coredns-674b8bbfcf-h2ts7\" (UID: \"49437803-415c-470c-9754-0a650aa11c56\") " pod="kube-system/coredns-674b8bbfcf-h2ts7"
Apr 16 23:57:10.470627 containerd[1624]: time="2026-04-16T23:57:10.470574698Z" level=error msg="Failed to destroy network for sandbox \"2f511403e83248dfa58a0ee8951f967af0139b771369314302d094e749bcde39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.473164 containerd[1624]: time="2026-04-16T23:57:10.473108935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-h5fkl,Uid:9f322798-d216-4e6e-a073-43fa8544e291,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f511403e83248dfa58a0ee8951f967af0139b771369314302d094e749bcde39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.473511 kubelet[2822]: E0416 23:57:10.473461 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f511403e83248dfa58a0ee8951f967af0139b771369314302d094e749bcde39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.473577 kubelet[2822]: E0416 23:57:10.473534 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f511403e83248dfa58a0ee8951f967af0139b771369314302d094e749bcde39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f4f89866d-h5fkl"
Apr 16 23:57:10.473577 kubelet[2822]: E0416 23:57:10.473555 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f511403e83248dfa58a0ee8951f967af0139b771369314302d094e749bcde39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f4f89866d-h5fkl"
Apr 16 23:57:10.473672 kubelet[2822]: E0416 23:57:10.473606 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f4f89866d-h5fkl_calico-system(9f322798-d216-4e6e-a073-43fa8544e291)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f4f89866d-h5fkl_calico-system(9f322798-d216-4e6e-a073-43fa8544e291)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f511403e83248dfa58a0ee8951f967af0139b771369314302d094e749bcde39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f4f89866d-h5fkl" podUID="9f322798-d216-4e6e-a073-43fa8544e291"
Apr 16 23:57:10.477885 containerd[1624]: time="2026-04-16T23:57:10.477846728Z" level=error msg="Failed to destroy network for sandbox \"d2312c23e5d9b48a31111b6ef78b61edd17b2d327f9a4d05c13c043c816a78ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.478724 containerd[1624]: time="2026-04-16T23:57:10.478693367Z" level=error msg="Failed to destroy network for sandbox \"2597503bf3227202e69af4d916314cc5860ec842c1134f715025905cb092df26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.480087 containerd[1624]: time="2026-04-16T23:57:10.480035485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnqx8,Uid:83c6c081-1afd-4376-901d-205c3cf0fa07,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2312c23e5d9b48a31111b6ef78b61edd17b2d327f9a4d05c13c043c816a78ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.480655 kubelet[2822]: E0416 23:57:10.480274 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2312c23e5d9b48a31111b6ef78b61edd17b2d327f9a4d05c13c043c816a78ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.480655 kubelet[2822]: E0416 23:57:10.480350 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2312c23e5d9b48a31111b6ef78b61edd17b2d327f9a4d05c13c043c816a78ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cnqx8"
Apr 16 23:57:10.480655 kubelet[2822]: E0416 23:57:10.480372 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2312c23e5d9b48a31111b6ef78b61edd17b2d327f9a4d05c13c043c816a78ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cnqx8"
Apr 16 23:57:10.480792 kubelet[2822]: E0416 23:57:10.480422 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cnqx8_calico-system(83c6c081-1afd-4376-901d-205c3cf0fa07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cnqx8_calico-system(83c6c081-1afd-4376-901d-205c3cf0fa07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2312c23e5d9b48a31111b6ef78b61edd17b2d327f9a4d05c13c043c816a78ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cnqx8" podUID="83c6c081-1afd-4376-901d-205c3cf0fa07"
Apr 16 23:57:10.481599 containerd[1624]: time="2026-04-16T23:57:10.481554603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7m5k,Uid:ec22351f-e1f7-41e4-a9b6-aff2fc21e395,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2597503bf3227202e69af4d916314cc5860ec842c1134f715025905cb092df26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.481766 kubelet[2822]: E0416 23:57:10.481728 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2597503bf3227202e69af4d916314cc5860ec842c1134f715025905cb092df26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.481806 kubelet[2822]: E0416 23:57:10.481777 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2597503bf3227202e69af4d916314cc5860ec842c1134f715025905cb092df26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7m5k"
Apr 16 23:57:10.481806 kubelet[2822]: E0416 23:57:10.481795 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2597503bf3227202e69af4d916314cc5860ec842c1134f715025905cb092df26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7m5k"
Apr 16 23:57:10.481881 kubelet[2822]: E0416 23:57:10.481841 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z7m5k_kube-system(ec22351f-e1f7-41e4-a9b6-aff2fc21e395)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z7m5k_kube-system(ec22351f-e1f7-41e4-a9b6-aff2fc21e395)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2597503bf3227202e69af4d916314cc5860ec842c1134f715025905cb092df26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z7m5k" podUID="ec22351f-e1f7-41e4-a9b6-aff2fc21e395"
Apr 16 23:57:10.584059 containerd[1624]: time="2026-04-16T23:57:10.583914656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc5f88bc4-fqkzx,Uid:db4fc8b7-8052-4c81-8eb8-55979d26b8c5,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:10.627549 containerd[1624]: time="2026-04-16T23:57:10.627415954Z" level=error msg="Failed to destroy network for sandbox \"57bf7170488d60bcbea96bc62f20ca96648dc7b1b2af92b500f4e8463a3b6795\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.629522 containerd[1624]: time="2026-04-16T23:57:10.629447471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc5f88bc4-fqkzx,Uid:db4fc8b7-8052-4c81-8eb8-55979d26b8c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bf7170488d60bcbea96bc62f20ca96648dc7b1b2af92b500f4e8463a3b6795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.629773 kubelet[2822]: E0416 23:57:10.629721 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bf7170488d60bcbea96bc62f20ca96648dc7b1b2af92b500f4e8463a3b6795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.629837 kubelet[2822]: E0416 23:57:10.629797 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bf7170488d60bcbea96bc62f20ca96648dc7b1b2af92b500f4e8463a3b6795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dc5f88bc4-fqkzx"
Apr 16 23:57:10.629837 kubelet[2822]: E0416 23:57:10.629832 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bf7170488d60bcbea96bc62f20ca96648dc7b1b2af92b500f4e8463a3b6795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dc5f88bc4-fqkzx"
Apr 16 23:57:10.630144 kubelet[2822]: E0416 23:57:10.629882 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dc5f88bc4-fqkzx_calico-system(db4fc8b7-8052-4c81-8eb8-55979d26b8c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dc5f88bc4-fqkzx_calico-system(db4fc8b7-8052-4c81-8eb8-55979d26b8c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57bf7170488d60bcbea96bc62f20ca96648dc7b1b2af92b500f4e8463a3b6795\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dc5f88bc4-fqkzx" podUID="db4fc8b7-8052-4c81-8eb8-55979d26b8c5"
Apr 16 23:57:10.681487 containerd[1624]: time="2026-04-16T23:57:10.681426437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-t64m4,Uid:04f7910c-7afc-459c-8ff2-3be7624489d1,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:10.693028 containerd[1624]: time="2026-04-16T23:57:10.692388861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-895c4f675-g692r,Uid:097e3f74-60ef-493b-90d6-68ecb1761186,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:10.709561 containerd[1624]: time="2026-04-16T23:57:10.709510036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2ts7,Uid:49437803-415c-470c-9754-0a650aa11c56,Namespace:kube-system,Attempt:0,}"
Apr 16 23:57:10.716285 containerd[1624]: time="2026-04-16T23:57:10.716240107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-dtrtx,Uid:4d7b14f0-1b63-46cd-97b4-9c93cc2c054f,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:10.732901 containerd[1624]: time="2026-04-16T23:57:10.732844723Z" level=error msg="Failed to destroy network for sandbox \"9e7022b0655da855dc1a5ef19f7e9e7829235be340b4fd3c88013d67d6ee94c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.734940 containerd[1624]: time="2026-04-16T23:57:10.734874320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-t64m4,Uid:04f7910c-7afc-459c-8ff2-3be7624489d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7022b0655da855dc1a5ef19f7e9e7829235be340b4fd3c88013d67d6ee94c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.735196 kubelet[2822]: E0416 23:57:10.735155 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7022b0655da855dc1a5ef19f7e9e7829235be340b4fd3c88013d67d6ee94c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.735266 kubelet[2822]: E0416 23:57:10.735212 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7022b0655da855dc1a5ef19f7e9e7829235be340b4fd3c88013d67d6ee94c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-t64m4"
Apr 16 23:57:10.735266 kubelet[2822]: E0416 23:57:10.735234 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7022b0655da855dc1a5ef19f7e9e7829235be340b4fd3c88013d67d6ee94c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-t64m4"
Apr 16 23:57:10.735315 kubelet[2822]: E0416 23:57:10.735280 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-t64m4_calico-system(04f7910c-7afc-459c-8ff2-3be7624489d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-t64m4_calico-system(04f7910c-7afc-459c-8ff2-3be7624489d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e7022b0655da855dc1a5ef19f7e9e7829235be340b4fd3c88013d67d6ee94c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-t64m4" podUID="04f7910c-7afc-459c-8ff2-3be7624489d1"
Apr 16 23:57:10.743281 containerd[1624]: time="2026-04-16T23:57:10.743220188Z" level=error msg="Failed to destroy network for sandbox \"98ae263f4f24c86c65b98e9fb566a963287921f7d4c1f682dcefb50677295c57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.745589 containerd[1624]: time="2026-04-16T23:57:10.745535985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-895c4f675-g692r,Uid:097e3f74-60ef-493b-90d6-68ecb1761186,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ae263f4f24c86c65b98e9fb566a963287921f7d4c1f682dcefb50677295c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.745816 kubelet[2822]: E0416 23:57:10.745783 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ae263f4f24c86c65b98e9fb566a963287921f7d4c1f682dcefb50677295c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.745868 kubelet[2822]: E0416 23:57:10.745844 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ae263f4f24c86c65b98e9fb566a963287921f7d4c1f682dcefb50677295c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-895c4f675-g692r"
Apr 16 23:57:10.745868 kubelet[2822]: E0416 23:57:10.745863 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ae263f4f24c86c65b98e9fb566a963287921f7d4c1f682dcefb50677295c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-895c4f675-g692r"
Apr 16 23:57:10.745929 kubelet[2822]: E0416 23:57:10.745904 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-895c4f675-g692r_calico-system(097e3f74-60ef-493b-90d6-68ecb1761186)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-895c4f675-g692r_calico-system(097e3f74-60ef-493b-90d6-68ecb1761186)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98ae263f4f24c86c65b98e9fb566a963287921f7d4c1f682dcefb50677295c57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-895c4f675-g692r" podUID="097e3f74-60ef-493b-90d6-68ecb1761186"
Apr 16 23:57:10.775666 containerd[1624]: time="2026-04-16T23:57:10.775608342Z" level=error msg="Failed to destroy network for sandbox \"49b09734d9a54b0e7bac849a9c14127679eb8ce781bd1c59fe9167d012aa17bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.776746 containerd[1624]: time="2026-04-16T23:57:10.776652460Z" level=error msg="Failed to destroy network for sandbox \"bc1518c574801a97b1eb29067e9017df60c3f9d09fded31ee7bdcdb00de49fdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.777649 containerd[1624]: time="2026-04-16T23:57:10.777616059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2ts7,Uid:49437803-415c-470c-9754-0a650aa11c56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b09734d9a54b0e7bac849a9c14127679eb8ce781bd1c59fe9167d012aa17bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.777978 kubelet[2822]: E0416 23:57:10.777943 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b09734d9a54b0e7bac849a9c14127679eb8ce781bd1c59fe9167d012aa17bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.778036 kubelet[2822]: E0416 23:57:10.778002 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b09734d9a54b0e7bac849a9c14127679eb8ce781bd1c59fe9167d012aa17bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2ts7"
Apr 16 23:57:10.778107 kubelet[2822]: E0416 23:57:10.778036 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b09734d9a54b0e7bac849a9c14127679eb8ce781bd1c59fe9167d012aa17bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2ts7"
Apr 16 23:57:10.778135 kubelet[2822]: E0416 23:57:10.778099 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h2ts7_kube-system(49437803-415c-470c-9754-0a650aa11c56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h2ts7_kube-system(49437803-415c-470c-9754-0a650aa11c56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49b09734d9a54b0e7bac849a9c14127679eb8ce781bd1c59fe9167d012aa17bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h2ts7" podUID="49437803-415c-470c-9754-0a650aa11c56"
Apr 16 23:57:10.780526 containerd[1624]: time="2026-04-16T23:57:10.780443135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-dtrtx,Uid:4d7b14f0-1b63-46cd-97b4-9c93cc2c054f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1518c574801a97b1eb29067e9017df60c3f9d09fded31ee7bdcdb00de49fdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.780676 kubelet[2822]: E0416 23:57:10.780640 2822 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1518c574801a97b1eb29067e9017df60c3f9d09fded31ee7bdcdb00de49fdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:57:10.780727 kubelet[2822]: E0416 23:57:10.780690 2822 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1518c574801a97b1eb29067e9017df60c3f9d09fded31ee7bdcdb00de49fdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f4f89866d-dtrtx"
Apr 16 23:57:10.780727 kubelet[2822]: E0416 23:57:10.780712 2822 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1518c574801a97b1eb29067e9017df60c3f9d09fded31ee7bdcdb00de49fdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f4f89866d-dtrtx"
Apr 16 23:57:10.780783 kubelet[2822]: E0416 23:57:10.780759 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f4f89866d-dtrtx_calico-system(4d7b14f0-1b63-46cd-97b4-9c93cc2c054f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f4f89866d-dtrtx_calico-system(4d7b14f0-1b63-46cd-97b4-9c93cc2c054f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc1518c574801a97b1eb29067e9017df60c3f9d09fded31ee7bdcdb00de49fdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f4f89866d-dtrtx" podUID="4d7b14f0-1b63-46cd-97b4-9c93cc2c054f"
Apr 16 23:57:11.208100 containerd[1624]: time="2026-04-16T23:57:11.208056923Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 16 23:57:11.223666 containerd[1624]: time="2026-04-16T23:57:11.223595100Z" level=info msg="Container 71bfc2e8b12cd0f8c35f461446d84aff05457af8011b31968e28a91510b74eb8: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:57:11.236825 containerd[1624]: time="2026-04-16T23:57:11.236769362Z" level=info msg="CreateContainer within sandbox \"708e3e7b9bdb04eca538541d3540149784665fda43cfb24274803c7ab1a5f460\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"71bfc2e8b12cd0f8c35f461446d84aff05457af8011b31968e28a91510b74eb8\""
Apr 16 23:57:11.237314 containerd[1624]: time="2026-04-16T23:57:11.237267881Z" level=info msg="StartContainer for \"71bfc2e8b12cd0f8c35f461446d84aff05457af8011b31968e28a91510b74eb8\""
Apr 16 23:57:11.238915 containerd[1624]: time="2026-04-16T23:57:11.238886359Z" level=info msg="connecting to shim 71bfc2e8b12cd0f8c35f461446d84aff05457af8011b31968e28a91510b74eb8" address="unix:///run/containerd/s/6f58918a0ef3275aa72269afd8258ddfa883d30c020b070a378c2285423103d1" protocol=ttrpc version=3
Apr 16 23:57:11.254211 systemd[1]: Started cri-containerd-71bfc2e8b12cd0f8c35f461446d84aff05457af8011b31968e28a91510b74eb8.scope - libcontainer container 71bfc2e8b12cd0f8c35f461446d84aff05457af8011b31968e28a91510b74eb8.
Apr 16 23:57:11.282698 systemd[1]: run-netns-cni\x2d5b11cf3d\x2dd832\x2d70f9\x2da1ae\x2dfe0241b44459.mount: Deactivated successfully.
Apr 16 23:57:11.282790 systemd[1]: run-netns-cni\x2da166a732\x2d46f9\x2dcc7f\x2d126d\x2d2c234ab8e4e0.mount: Deactivated successfully.
Apr 16 23:57:11.282831 systemd[1]: run-netns-cni\x2d4f17f066\x2dd26b\x2dff6e\x2d3e2b\x2d073d352a59e5.mount: Deactivated successfully.
Apr 16 23:57:11.322517 containerd[1624]: time="2026-04-16T23:57:11.322473719Z" level=info msg="StartContainer for \"71bfc2e8b12cd0f8c35f461446d84aff05457af8011b31968e28a91510b74eb8\" returns successfully"
Apr 16 23:57:11.472316 kubelet[2822]: I0416 23:57:11.471677 2822 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-backend-key-pair\") pod \"097e3f74-60ef-493b-90d6-68ecb1761186\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") "
Apr 16 23:57:11.472316 kubelet[2822]: I0416 23:57:11.471745 2822 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-ca-bundle\") pod \"097e3f74-60ef-493b-90d6-68ecb1761186\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") "
Apr 16 23:57:11.472316 kubelet[2822]: I0416 23:57:11.471763 2822
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-nginx-config\") pod \"097e3f74-60ef-493b-90d6-68ecb1761186\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") " Apr 16 23:57:11.472316 kubelet[2822]: I0416 23:57:11.471793 2822 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj4zw\" (UniqueName: \"kubernetes.io/projected/097e3f74-60ef-493b-90d6-68ecb1761186-kube-api-access-vj4zw\") pod \"097e3f74-60ef-493b-90d6-68ecb1761186\" (UID: \"097e3f74-60ef-493b-90d6-68ecb1761186\") " Apr 16 23:57:11.472316 kubelet[2822]: I0416 23:57:11.472128 2822 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "097e3f74-60ef-493b-90d6-68ecb1761186" (UID: "097e3f74-60ef-493b-90d6-68ecb1761186"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:57:11.473411 kubelet[2822]: I0416 23:57:11.472313 2822 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "097e3f74-60ef-493b-90d6-68ecb1761186" (UID: "097e3f74-60ef-493b-90d6-68ecb1761186"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:57:11.476734 kubelet[2822]: I0416 23:57:11.476686 2822 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "097e3f74-60ef-493b-90d6-68ecb1761186" (UID: "097e3f74-60ef-493b-90d6-68ecb1761186"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:57:11.478067 kubelet[2822]: I0416 23:57:11.476922 2822 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097e3f74-60ef-493b-90d6-68ecb1761186-kube-api-access-vj4zw" (OuterVolumeSpecName: "kube-api-access-vj4zw") pod "097e3f74-60ef-493b-90d6-68ecb1761186" (UID: "097e3f74-60ef-493b-90d6-68ecb1761186"). InnerVolumeSpecName "kube-api-access-vj4zw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:57:11.477550 systemd[1]: var-lib-kubelet-pods-097e3f74\x2d60ef\x2d493b\x2d90d6\x2d68ecb1761186-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvj4zw.mount: Deactivated successfully. Apr 16 23:57:11.477641 systemd[1]: var-lib-kubelet-pods-097e3f74\x2d60ef\x2d493b\x2d90d6\x2d68ecb1761186-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 23:57:11.572371 kubelet[2822]: I0416 23:57:11.572327 2822 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-ca-bundle\") on node \"ci-4459-2-4-n-fcb502653b\" DevicePath \"\"" Apr 16 23:57:11.572371 kubelet[2822]: I0416 23:57:11.572360 2822 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/097e3f74-60ef-493b-90d6-68ecb1761186-nginx-config\") on node \"ci-4459-2-4-n-fcb502653b\" DevicePath \"\"" Apr 16 23:57:11.572371 kubelet[2822]: I0416 23:57:11.572371 2822 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vj4zw\" (UniqueName: \"kubernetes.io/projected/097e3f74-60ef-493b-90d6-68ecb1761186-kube-api-access-vj4zw\") on node \"ci-4459-2-4-n-fcb502653b\" DevicePath \"\"" Apr 16 23:57:11.572371 kubelet[2822]: I0416 23:57:11.572381 2822 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/097e3f74-60ef-493b-90d6-68ecb1761186-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-fcb502653b\" DevicePath \"\"" Apr 16 23:57:12.058126 systemd[1]: Removed slice kubepods-besteffort-pod097e3f74_60ef_493b_90d6_68ecb1761186.slice - libcontainer container kubepods-besteffort-pod097e3f74_60ef_493b_90d6_68ecb1761186.slice. Apr 16 23:57:12.230877 kubelet[2822]: I0416 23:57:12.230805 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v4dpx" podStartSLOduration=5.986858047 podStartE2EDuration="30.230787938s" podCreationTimestamp="2026-04-16 23:56:42 +0000 UTC" firstStartedPulling="2026-04-16 23:56:43.20761397 +0000 UTC m=+21.244548464" lastFinishedPulling="2026-04-16 23:57:07.451543861 +0000 UTC m=+45.488478355" observedRunningTime="2026-04-16 23:57:12.218721156 +0000 UTC m=+50.255655650" watchObservedRunningTime="2026-04-16 23:57:12.230787938 +0000 UTC m=+50.267722432" Apr 16 23:57:12.287592 systemd[1]: Created slice kubepods-besteffort-podf13b2269_dd6d_4b4f_b37b_15e1ef65aaf1.slice - libcontainer container kubepods-besteffort-podf13b2269_dd6d_4b4f_b37b_15e1ef65aaf1.slice. 
Apr 16 23:57:12.377158 kubelet[2822]: I0416 23:57:12.376813 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1-nginx-config\") pod \"whisker-6bdf54f55b-4624x\" (UID: \"f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1\") " pod="calico-system/whisker-6bdf54f55b-4624x"
Apr 16 23:57:12.377158 kubelet[2822]: I0416 23:57:12.376892 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1-whisker-backend-key-pair\") pod \"whisker-6bdf54f55b-4624x\" (UID: \"f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1\") " pod="calico-system/whisker-6bdf54f55b-4624x"
Apr 16 23:57:12.377158 kubelet[2822]: I0416 23:57:12.376915 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1-whisker-ca-bundle\") pod \"whisker-6bdf54f55b-4624x\" (UID: \"f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1\") " pod="calico-system/whisker-6bdf54f55b-4624x"
Apr 16 23:57:12.377158 kubelet[2822]: I0416 23:57:12.376938 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zjl\" (UniqueName: \"kubernetes.io/projected/f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1-kube-api-access-k5zjl\") pod \"whisker-6bdf54f55b-4624x\" (UID: \"f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1\") " pod="calico-system/whisker-6bdf54f55b-4624x"
Apr 16 23:57:12.591570 containerd[1624]: time="2026-04-16T23:57:12.591518942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bdf54f55b-4624x,Uid:f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:12.814705 systemd-networkd[1494]: calic5c3ee32ffe: Link UP
Apr 16 23:57:12.816749 systemd-networkd[1494]: calic5c3ee32ffe: Gained carrier
Apr 16 23:57:12.831946 containerd[1624]: 2026-04-16 23:57:12.631 [ERROR][4026] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 16 23:57:12.831946 containerd[1624]: 2026-04-16 23:57:12.663 [INFO][4026] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0 whisker-6bdf54f55b- calico-system f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1 924 0 2026-04-16 23:57:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bdf54f55b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b whisker-6bdf54f55b-4624x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic5c3ee32ffe [] [] }} ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-"
Apr 16 23:57:12.831946 containerd[1624]: 2026-04-16 23:57:12.663 [INFO][4026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0"
Apr 16 23:57:12.831946 containerd[1624]: 2026-04-16 23:57:12.747 [INFO][4112] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" HandleID="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Workload="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0"
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.759 [INFO][4112] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" HandleID="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Workload="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400020e800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"whisker-6bdf54f55b-4624x", "timestamp":"2026-04-16 23:57:12.747270639 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002214a0)}
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.759 [INFO][4112] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.759 [INFO][4112] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.759 [INFO][4112] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b'
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.762 [INFO][4112] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.772 [INFO][4112] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.777 [INFO][4112] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.779 [INFO][4112] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832183 containerd[1624]: 2026-04-16 23:57:12.781 [INFO][4112] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832371 containerd[1624]: 2026-04-16 23:57:12.781 [INFO][4112] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832371 containerd[1624]: 2026-04-16 23:57:12.783 [INFO][4112] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b
Apr 16 23:57:12.832371 containerd[1624]: 2026-04-16 23:57:12.790 [INFO][4112] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832371 containerd[1624]: 2026-04-16 23:57:12.798 [INFO][4112] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.193/26] block=192.168.76.192/26 handle="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832371 containerd[1624]: 2026-04-16 23:57:12.798 [INFO][4112] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.193/26] handle="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" host="ci-4459-2-4-n-fcb502653b"
Apr 16 23:57:12.832371 containerd[1624]: 2026-04-16 23:57:12.798 [INFO][4112] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 16 23:57:12.832371 containerd[1624]: 2026-04-16 23:57:12.798 [INFO][4112] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.193/26] IPv6=[] ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" HandleID="k8s-pod-network.30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Workload="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0"
Apr 16 23:57:12.832494 containerd[1624]: 2026-04-16 23:57:12.801 [INFO][4026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0", GenerateName:"whisker-6bdf54f55b-", Namespace:"calico-system", SelfLink:"", UID:"f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bdf54f55b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"whisker-6bdf54f55b-4624x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5c3ee32ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:57:12.832494 containerd[1624]: 2026-04-16 23:57:12.801 [INFO][4026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.193/32] ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0"
Apr 16 23:57:12.832557 containerd[1624]: 2026-04-16 23:57:12.801 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5c3ee32ffe ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0"
Apr 16 23:57:12.832557 containerd[1624]: 2026-04-16 23:57:12.817 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0"
Apr 16 23:57:12.832595 containerd[1624]: 2026-04-16 23:57:12.817 [INFO][4026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0", GenerateName:"whisker-6bdf54f55b-", Namespace:"calico-system", SelfLink:"", UID:"f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bdf54f55b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b", Pod:"whisker-6bdf54f55b-4624x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5c3ee32ffe", MAC:"c2:33:ab:93:ee:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:57:12.832637 containerd[1624]: 2026-04-16 23:57:12.829 [INFO][4026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" Namespace="calico-system" Pod="whisker-6bdf54f55b-4624x" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-whisker--6bdf54f55b--4624x-eth0"
Apr 16 23:57:12.866073 containerd[1624]: time="2026-04-16T23:57:12.864920831Z" level=info msg="connecting to shim 30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b" address="unix:///run/containerd/s/deaf83c9893c8609475811834906e40c8845d5da069426c3e99e93a989fdb41b" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:57:12.887189 systemd[1]: Started cri-containerd-30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b.scope - libcontainer container 30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b.
Apr 16 23:57:12.925615 containerd[1624]: time="2026-04-16T23:57:12.925458584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bdf54f55b-4624x,Uid:f13b2269-dd6d-4b4f-b37b-15e1ef65aaf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b\""
Apr 16 23:57:12.927504 containerd[1624]: time="2026-04-16T23:57:12.927476461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 16 23:57:13.188342 systemd-networkd[1494]: vxlan.calico: Link UP
Apr 16 23:57:13.188348 systemd-networkd[1494]: vxlan.calico: Gained carrier
Apr 16 23:57:13.997414 systemd-networkd[1494]: calic5c3ee32ffe: Gained IPv6LL
Apr 16 23:57:14.054264 kubelet[2822]: I0416 23:57:14.054208 2822 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097e3f74-60ef-493b-90d6-68ecb1761186" path="/var/lib/kubelet/pods/097e3f74-60ef-493b-90d6-68ecb1761186/volumes"
Apr 16 23:57:14.626250 containerd[1624]: time="2026-04-16T23:57:14.626190309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:14.627673 containerd[1624]: time="2026-04-16T23:57:14.627645667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804"
Apr 16 23:57:14.629851 containerd[1624]: time="2026-04-16T23:57:14.629474224Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:14.633576 containerd[1624]: time="2026-04-16T23:57:14.633351459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:14.634120 containerd[1624]: time="2026-04-16T23:57:14.634096658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.706589277s"
Apr 16 23:57:14.634215 containerd[1624]: time="2026-04-16T23:57:14.634188578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\""
Apr 16 23:57:14.639452 containerd[1624]: time="2026-04-16T23:57:14.639420890Z" level=info msg="CreateContainer within sandbox \"30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 16 23:57:14.651890 containerd[1624]: time="2026-04-16T23:57:14.651164073Z" level=info msg="Container b85b79eaf6661a224f96d5430283ae7338a6a9803fb334089b8874529fb6ca51: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:57:14.662037 containerd[1624]: time="2026-04-16T23:57:14.661985938Z" level=info msg="CreateContainer within sandbox \"30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b85b79eaf6661a224f96d5430283ae7338a6a9803fb334089b8874529fb6ca51\""
Apr 16 23:57:14.662546 containerd[1624]: time="2026-04-16T23:57:14.662503057Z" level=info msg="StartContainer for \"b85b79eaf6661a224f96d5430283ae7338a6a9803fb334089b8874529fb6ca51\""
Apr 16 23:57:14.664514 containerd[1624]: time="2026-04-16T23:57:14.664461294Z" level=info msg="connecting to shim b85b79eaf6661a224f96d5430283ae7338a6a9803fb334089b8874529fb6ca51" address="unix:///run/containerd/s/deaf83c9893c8609475811834906e40c8845d5da069426c3e99e93a989fdb41b" protocol=ttrpc version=3
Apr 16 23:57:14.686197 systemd[1]: Started cri-containerd-b85b79eaf6661a224f96d5430283ae7338a6a9803fb334089b8874529fb6ca51.scope - libcontainer container b85b79eaf6661a224f96d5430283ae7338a6a9803fb334089b8874529fb6ca51.
Apr 16 23:57:14.721363 containerd[1624]: time="2026-04-16T23:57:14.721309653Z" level=info msg="StartContainer for \"b85b79eaf6661a224f96d5430283ae7338a6a9803fb334089b8874529fb6ca51\" returns successfully"
Apr 16 23:57:14.722524 containerd[1624]: time="2026-04-16T23:57:14.722484371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 16 23:57:14.829501 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL
Apr 16 23:57:16.771314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517733005.mount: Deactivated successfully.
Apr 16 23:57:16.798491 containerd[1624]: time="2026-04-16T23:57:16.798420679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:16.800508 containerd[1624]: time="2026-04-16T23:57:16.800455476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Apr 16 23:57:16.802023 containerd[1624]: time="2026-04-16T23:57:16.801991314Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:16.804757 containerd[1624]: time="2026-04-16T23:57:16.804719710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:16.805490 containerd[1624]: time="2026-04-16T23:57:16.805327909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.082807018s"
Apr 16 23:57:16.805490 containerd[1624]: time="2026-04-16T23:57:16.805361189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Apr 16 23:57:16.811635 containerd[1624]: time="2026-04-16T23:57:16.811606061Z" level=info msg="CreateContainer within sandbox \"30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 16 23:57:16.824117 containerd[1624]: time="2026-04-16T23:57:16.821428366Z" level=info msg="Container 157d9acb013894a0a8727c5c9d8ea692c5a722d06060c9affd994e6011a1c383: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:57:16.836199 containerd[1624]: time="2026-04-16T23:57:16.836164385Z" level=info msg="CreateContainer within sandbox \"30af21fd9969f7f683eb9113ca7ec5042ed2b7efc6c6249a6903468a6318322b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"157d9acb013894a0a8727c5c9d8ea692c5a722d06060c9affd994e6011a1c383\""
Apr 16 23:57:16.836936 containerd[1624]: time="2026-04-16T23:57:16.836907144Z" level=info msg="StartContainer for \"157d9acb013894a0a8727c5c9d8ea692c5a722d06060c9affd994e6011a1c383\""
Apr 16 23:57:16.838047 containerd[1624]: time="2026-04-16T23:57:16.838013503Z" level=info msg="connecting to shim 157d9acb013894a0a8727c5c9d8ea692c5a722d06060c9affd994e6011a1c383" address="unix:///run/containerd/s/deaf83c9893c8609475811834906e40c8845d5da069426c3e99e93a989fdb41b" protocol=ttrpc version=3
Apr 16 23:57:16.853198 systemd[1]: Started cri-containerd-157d9acb013894a0a8727c5c9d8ea692c5a722d06060c9affd994e6011a1c383.scope - libcontainer container 157d9acb013894a0a8727c5c9d8ea692c5a722d06060c9affd994e6011a1c383.
Apr 16 23:57:16.888583 containerd[1624]: time="2026-04-16T23:57:16.888546710Z" level=info msg="StartContainer for \"157d9acb013894a0a8727c5c9d8ea692c5a722d06060c9affd994e6011a1c383\" returns successfully"
Apr 16 23:57:17.228469 kubelet[2822]: I0416 23:57:17.228124 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bdf54f55b-4624x" podStartSLOduration=1.348611218 podStartE2EDuration="5.228108704s" podCreationTimestamp="2026-04-16 23:57:12 +0000 UTC" firstStartedPulling="2026-04-16 23:57:12.926843142 +0000 UTC m=+50.963777596" lastFinishedPulling="2026-04-16 23:57:16.806340628 +0000 UTC m=+54.843275082" observedRunningTime="2026-04-16 23:57:17.226935066 +0000 UTC m=+55.263869560" watchObservedRunningTime="2026-04-16 23:57:17.228108704 +0000 UTC m=+55.265043198"
Apr 16 23:57:22.052332 containerd[1624]: time="2026-04-16T23:57:22.052259878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-t64m4,Uid:04f7910c-7afc-459c-8ff2-3be7624489d1,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:22.052719 containerd[1624]: time="2026-04-16T23:57:22.052389398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-dtrtx,Uid:4d7b14f0-1b63-46cd-97b4-9c93cc2c054f,Namespace:calico-system,Attempt:0,}"
Apr 16 23:57:22.171363 systemd-networkd[1494]: calid57a684c8fc: Link UP
Apr 16 23:57:22.172230 systemd-networkd[1494]: calid57a684c8fc: Gained carrier
Apr 16 23:57:22.185666 containerd[1624]: 2026-04-16 23:57:22.099 [INFO][4443] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0 calico-apiserver-7f4f89866d- calico-system 4d7b14f0-1b63-46cd-97b4-9c93cc2c054f 873 0 2026-04-16 23:56:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f4f89866d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b calico-apiserver-7f4f89866d-dtrtx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid57a684c8fc [] [] }} ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-"
Apr 16 23:57:22.185666 containerd[1624]: 2026-04-16 23:57:22.100 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0"
Apr 16 23:57:22.185666 containerd[1624]: 2026-04-16 23:57:22.127 [INFO][4472] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" HandleID="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0"
Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.138 [INFO][4472] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" HandleID="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f02c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"calico-apiserver-7f4f89866d-dtrtx", "timestamp":"2026-04-16 23:57:22.12782033 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", IPv4Pools:[]net.IPNet{},
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003aa000)} Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.138 [INFO][4472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.138 [INFO][4472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.138 [INFO][4472] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b' Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.142 [INFO][4472] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.146 [INFO][4472] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.151 [INFO][4472] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.152 [INFO][4472] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.185847 containerd[1624]: 2026-04-16 23:57:22.154 [INFO][4472] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.186446 containerd[1624]: 2026-04-16 23:57:22.155 [INFO][4472] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.186446 containerd[1624]: 2026-04-16 23:57:22.156 [INFO][4472] ipam/ipam.go 
1806: Creating new handle: k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d Apr 16 23:57:22.186446 containerd[1624]: 2026-04-16 23:57:22.160 [INFO][4472] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.186446 containerd[1624]: 2026-04-16 23:57:22.167 [INFO][4472] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.194/26] block=192.168.76.192/26 handle="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.186446 containerd[1624]: 2026-04-16 23:57:22.167 [INFO][4472] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.194/26] handle="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.186446 containerd[1624]: 2026-04-16 23:57:22.167 [INFO][4472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:57:22.186446 containerd[1624]: 2026-04-16 23:57:22.167 [INFO][4472] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.194/26] IPv6=[] ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" HandleID="k8s-pod-network.03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" Apr 16 23:57:22.187109 containerd[1624]: 2026-04-16 23:57:22.169 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0", GenerateName:"calico-apiserver-7f4f89866d-", Namespace:"calico-system", SelfLink:"", UID:"4d7b14f0-1b63-46cd-97b4-9c93cc2c054f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f89866d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"calico-apiserver-7f4f89866d-dtrtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid57a684c8fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:22.187189 containerd[1624]: 2026-04-16 23:57:22.169 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.194/32] ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" Apr 16 23:57:22.187189 containerd[1624]: 2026-04-16 23:57:22.169 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid57a684c8fc ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" Apr 16 23:57:22.187189 containerd[1624]: 2026-04-16 23:57:22.173 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" Apr 16 23:57:22.187437 containerd[1624]: 2026-04-16 23:57:22.173 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0", GenerateName:"calico-apiserver-7f4f89866d-", Namespace:"calico-system", SelfLink:"", UID:"4d7b14f0-1b63-46cd-97b4-9c93cc2c054f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f89866d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d", Pod:"calico-apiserver-7f4f89866d-dtrtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid57a684c8fc", MAC:"66:3d:95:4c:a3:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:22.187583 containerd[1624]: 2026-04-16 23:57:22.183 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-dtrtx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--dtrtx-eth0" Apr 16 23:57:22.218261 containerd[1624]: time="2026-04-16T23:57:22.218205320Z" level=info 
msg="connecting to shim 03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d" address="unix:///run/containerd/s/de461837d8a7a27c8eca2100397cd055b0cef5c2f86312e0cb55cc8f1b128dea" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:22.239320 systemd[1]: Started cri-containerd-03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d.scope - libcontainer container 03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d. Apr 16 23:57:22.276054 systemd-networkd[1494]: cali269b254643b: Link UP Apr 16 23:57:22.278229 systemd-networkd[1494]: cali269b254643b: Gained carrier Apr 16 23:57:22.290393 containerd[1624]: time="2026-04-16T23:57:22.290344497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-dtrtx,Uid:4d7b14f0-1b63-46cd-97b4-9c93cc2c054f,Namespace:calico-system,Attempt:0,} returns sandbox id \"03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d\"" Apr 16 23:57:22.291979 containerd[1624]: time="2026-04-16T23:57:22.291954935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:57:22.292562 containerd[1624]: 2026-04-16 23:57:22.095 [INFO][4437] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0 goldmane-5b85766d88- calico-system 04f7910c-7afc-459c-8ff2-3be7624489d1 869 0 2026-04-16 23:56:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b goldmane-5b85766d88-t64m4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali269b254643b [] [] }} ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" 
WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-" Apr 16 23:57:22.292562 containerd[1624]: 2026-04-16 23:57:22.095 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" Apr 16 23:57:22.292562 containerd[1624]: 2026-04-16 23:57:22.129 [INFO][4466] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" HandleID="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Workload="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.143 [INFO][4466] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" HandleID="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Workload="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"goldmane-5b85766d88-t64m4", "timestamp":"2026-04-16 23:57:22.129376608 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400038f600)} Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.143 [INFO][4466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.167 [INFO][4466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.167 [INFO][4466] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b' Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.243 [INFO][4466] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.249 [INFO][4466] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.254 [INFO][4466] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.255 [INFO][4466] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292719 containerd[1624]: 2026-04-16 23:57:22.258 [INFO][4466] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292911 containerd[1624]: 2026-04-16 23:57:22.258 [INFO][4466] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292911 containerd[1624]: 2026-04-16 23:57:22.259 [INFO][4466] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839 Apr 16 23:57:22.292911 containerd[1624]: 2026-04-16 23:57:22.264 [INFO][4466] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" 
host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292911 containerd[1624]: 2026-04-16 23:57:22.270 [INFO][4466] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.195/26] block=192.168.76.192/26 handle="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292911 containerd[1624]: 2026-04-16 23:57:22.271 [INFO][4466] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.195/26] handle="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:22.292911 containerd[1624]: 2026-04-16 23:57:22.271 [INFO][4466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:57:22.292911 containerd[1624]: 2026-04-16 23:57:22.271 [INFO][4466] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.195/26] IPv6=[] ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" HandleID="k8s-pod-network.10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Workload="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" Apr 16 23:57:22.293081 containerd[1624]: 2026-04-16 23:57:22.273 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"04f7910c-7afc-459c-8ff2-3be7624489d1", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"goldmane-5b85766d88-t64m4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali269b254643b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:22.293144 containerd[1624]: 2026-04-16 23:57:22.273 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.195/32] ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" Apr 16 23:57:22.293144 containerd[1624]: 2026-04-16 23:57:22.273 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali269b254643b ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" Apr 16 23:57:22.293144 containerd[1624]: 2026-04-16 23:57:22.275 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" 
WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" Apr 16 23:57:22.293204 containerd[1624]: 2026-04-16 23:57:22.276 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"04f7910c-7afc-459c-8ff2-3be7624489d1", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839", Pod:"goldmane-5b85766d88-t64m4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali269b254643b", MAC:"e6:6f:d2:fe:53:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 
23:57:22.293254 containerd[1624]: 2026-04-16 23:57:22.289 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" Namespace="calico-system" Pod="goldmane-5b85766d88-t64m4" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-goldmane--5b85766d88--t64m4-eth0" Apr 16 23:57:22.327175 containerd[1624]: time="2026-04-16T23:57:22.326851805Z" level=info msg="connecting to shim 10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839" address="unix:///run/containerd/s/cf902df878d99be2b13f6b531fee56e8055487ef9fc936c6e35d550bf494da9e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:22.350217 systemd[1]: Started cri-containerd-10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839.scope - libcontainer container 10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839. Apr 16 23:57:22.383964 containerd[1624]: time="2026-04-16T23:57:22.383917963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-t64m4,Uid:04f7910c-7afc-459c-8ff2-3be7624489d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839\"" Apr 16 23:57:23.341180 systemd-networkd[1494]: calid57a684c8fc: Gained IPv6LL Apr 16 23:57:23.533335 systemd-networkd[1494]: cali269b254643b: Gained IPv6LL Apr 16 23:57:24.052219 containerd[1624]: time="2026-04-16T23:57:24.052136895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7m5k,Uid:ec22351f-e1f7-41e4-a9b6-aff2fc21e395,Namespace:kube-system,Attempt:0,}" Apr 16 23:57:24.052539 containerd[1624]: time="2026-04-16T23:57:24.052277535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2ts7,Uid:49437803-415c-470c-9754-0a650aa11c56,Namespace:kube-system,Attempt:0,}" Apr 16 23:57:24.170374 systemd-networkd[1494]: cali7a6017392e9: Link UP Apr 16 23:57:24.172217 systemd-networkd[1494]: cali7a6017392e9: 
Gained carrier Apr 16 23:57:24.188285 containerd[1624]: 2026-04-16 23:57:24.100 [INFO][4654] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0 coredns-674b8bbfcf- kube-system 49437803-415c-470c-9754-0a650aa11c56 872 0 2026-04-16 23:56:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b coredns-674b8bbfcf-h2ts7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7a6017392e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-" Apr 16 23:57:24.188285 containerd[1624]: 2026-04-16 23:57:24.100 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" Apr 16 23:57:24.188285 containerd[1624]: 2026-04-16 23:57:24.124 [INFO][4675] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" HandleID="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Workload="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.135 [INFO][4675] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" 
HandleID="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Workload="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400052ae60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"coredns-674b8bbfcf-h2ts7", "timestamp":"2026-04-16 23:57:24.124918951 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c31e0)} Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.135 [INFO][4675] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.135 [INFO][4675] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.135 [INFO][4675] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b' Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.138 [INFO][4675] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.142 [INFO][4675] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.147 [INFO][4675] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.148 [INFO][4675] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188776 containerd[1624]: 2026-04-16 23:57:24.150 [INFO][4675] 
ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188967 containerd[1624]: 2026-04-16 23:57:24.150 [INFO][4675] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188967 containerd[1624]: 2026-04-16 23:57:24.152 [INFO][4675] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773 Apr 16 23:57:24.188967 containerd[1624]: 2026-04-16 23:57:24.155 [INFO][4675] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188967 containerd[1624]: 2026-04-16 23:57:24.162 [INFO][4675] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.196/26] block=192.168.76.192/26 handle="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188967 containerd[1624]: 2026-04-16 23:57:24.162 [INFO][4675] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.196/26] handle="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.188967 containerd[1624]: 2026-04-16 23:57:24.162 [INFO][4675] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:57:24.188967 containerd[1624]: 2026-04-16 23:57:24.162 [INFO][4675] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.196/26] IPv6=[] ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" HandleID="k8s-pod-network.fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Workload="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" Apr 16 23:57:24.189175 containerd[1624]: 2026-04-16 23:57:24.165 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"49437803-415c-470c-9754-0a650aa11c56", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"coredns-674b8bbfcf-h2ts7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali7a6017392e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:24.189175 containerd[1624]: 2026-04-16 23:57:24.166 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.196/32] ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" Apr 16 23:57:24.189175 containerd[1624]: 2026-04-16 23:57:24.166 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a6017392e9 ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" Apr 16 23:57:24.189175 containerd[1624]: 2026-04-16 23:57:24.172 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" Apr 16 23:57:24.189175 containerd[1624]: 2026-04-16 23:57:24.172 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" 
WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"49437803-415c-470c-9754-0a650aa11c56", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773", Pod:"coredns-674b8bbfcf-h2ts7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a6017392e9", MAC:"f2:8d:c7:88:30:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:24.189175 
containerd[1624]: 2026-04-16 23:57:24.183 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2ts7" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--h2ts7-eth0" Apr 16 23:57:24.216709 containerd[1624]: time="2026-04-16T23:57:24.216659019Z" level=info msg="connecting to shim fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773" address="unix:///run/containerd/s/72baec041118e5fe3c2aaee02cd947422b42d98f483194faba285e6d917d42d5" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:24.239228 systemd[1]: Started cri-containerd-fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773.scope - libcontainer container fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773. Apr 16 23:57:24.284856 systemd-networkd[1494]: calif10fac810ce: Link UP Apr 16 23:57:24.285837 systemd-networkd[1494]: calif10fac810ce: Gained carrier Apr 16 23:57:24.295210 containerd[1624]: time="2026-04-16T23:57:24.295133147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2ts7,Uid:49437803-415c-470c-9754-0a650aa11c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773\"" Apr 16 23:57:24.304464 containerd[1624]: time="2026-04-16T23:57:24.304346254Z" level=info msg="CreateContainer within sandbox \"fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.095 [INFO][4642] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0 coredns-674b8bbfcf- kube-system ec22351f-e1f7-41e4-a9b6-aff2fc21e395 864 0 2026-04-16 23:56:28 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b coredns-674b8bbfcf-z7m5k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif10fac810ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.096 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.127 [INFO][4673] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" HandleID="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Workload="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.139 [INFO][4673] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" HandleID="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Workload="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eacf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"coredns-674b8bbfcf-z7m5k", "timestamp":"2026-04-16 23:57:24.127277667 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003569a0)} Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.139 [INFO][4673] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.164 [INFO][4673] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.164 [INFO][4673] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b' Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.239 [INFO][4673] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.244 [INFO][4673] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.251 [INFO][4673] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.255 [INFO][4673] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.258 [INFO][4673] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.258 [INFO][4673] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.260 
[INFO][4673] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.265 [INFO][4673] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.275 [INFO][4673] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.197/26] block=192.168.76.192/26 handle="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.275 [INFO][4673] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.197/26] handle="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.275 [INFO][4673] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:57:24.309066 containerd[1624]: 2026-04-16 23:57:24.275 [INFO][4673] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.197/26] IPv6=[] ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" HandleID="k8s-pod-network.3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Workload="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" Apr 16 23:57:24.309605 containerd[1624]: 2026-04-16 23:57:24.279 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ec22351f-e1f7-41e4-a9b6-aff2fc21e395", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"coredns-674b8bbfcf-z7m5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calif10fac810ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:24.309605 containerd[1624]: 2026-04-16 23:57:24.280 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.197/32] ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" Apr 16 23:57:24.309605 containerd[1624]: 2026-04-16 23:57:24.280 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif10fac810ce ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" Apr 16 23:57:24.309605 containerd[1624]: 2026-04-16 23:57:24.286 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" Apr 16 23:57:24.309605 containerd[1624]: 2026-04-16 23:57:24.286 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" 
WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ec22351f-e1f7-41e4-a9b6-aff2fc21e395", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e", Pod:"coredns-674b8bbfcf-z7m5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif10fac810ce", MAC:"76:83:6d:08:da:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:24.309605 
containerd[1624]: 2026-04-16 23:57:24.303 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7m5k" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-coredns--674b8bbfcf--z7m5k-eth0" Apr 16 23:57:24.329117 containerd[1624]: time="2026-04-16T23:57:24.329067859Z" level=info msg="Container 5390e6739508ccb54a3c5ffa7ea9e8745ccd85924b0d7e122d5f1183c64d1512: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:24.339372 containerd[1624]: time="2026-04-16T23:57:24.339318324Z" level=info msg="CreateContainer within sandbox \"fc4026fa78f362bdb63e5d8fe8b142a4b47de9ef5c19d4c8f93472af4a56c773\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5390e6739508ccb54a3c5ffa7ea9e8745ccd85924b0d7e122d5f1183c64d1512\"" Apr 16 23:57:24.339862 containerd[1624]: time="2026-04-16T23:57:24.339824443Z" level=info msg="StartContainer for \"5390e6739508ccb54a3c5ffa7ea9e8745ccd85924b0d7e122d5f1183c64d1512\"" Apr 16 23:57:24.341100 containerd[1624]: time="2026-04-16T23:57:24.341069081Z" level=info msg="connecting to shim 5390e6739508ccb54a3c5ffa7ea9e8745ccd85924b0d7e122d5f1183c64d1512" address="unix:///run/containerd/s/72baec041118e5fe3c2aaee02cd947422b42d98f483194faba285e6d917d42d5" protocol=ttrpc version=3 Apr 16 23:57:24.354405 containerd[1624]: time="2026-04-16T23:57:24.354287142Z" level=info msg="connecting to shim 3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e" address="unix:///run/containerd/s/1087c9584ba8f3c76ebf3d9333fd3234504ab9c4d1bd944011523d724a282607" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:24.363211 systemd[1]: Started cri-containerd-5390e6739508ccb54a3c5ffa7ea9e8745ccd85924b0d7e122d5f1183c64d1512.scope - libcontainer container 5390e6739508ccb54a3c5ffa7ea9e8745ccd85924b0d7e122d5f1183c64d1512. 
Apr 16 23:57:24.385136 systemd[1]: Started cri-containerd-3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e.scope - libcontainer container 3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e. Apr 16 23:57:24.417479 containerd[1624]: time="2026-04-16T23:57:24.417434132Z" level=info msg="StartContainer for \"5390e6739508ccb54a3c5ffa7ea9e8745ccd85924b0d7e122d5f1183c64d1512\" returns successfully" Apr 16 23:57:24.429984 containerd[1624]: time="2026-04-16T23:57:24.429858914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7m5k,Uid:ec22351f-e1f7-41e4-a9b6-aff2fc21e395,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e\"" Apr 16 23:57:24.443758 containerd[1624]: time="2026-04-16T23:57:24.443687374Z" level=info msg="CreateContainer within sandbox \"3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:57:24.467844 containerd[1624]: time="2026-04-16T23:57:24.467761740Z" level=info msg="Container 7b543151421258b81f6f2d58a03b7f144c1735285ac068ef7461d1627bc88931: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:24.480004 containerd[1624]: time="2026-04-16T23:57:24.479915643Z" level=info msg="CreateContainer within sandbox \"3b51e5008d66aa52432269d72385ad94e3766a6d6be3b21d6624c3f46f39694e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b543151421258b81f6f2d58a03b7f144c1735285ac068ef7461d1627bc88931\"" Apr 16 23:57:24.480491 containerd[1624]: time="2026-04-16T23:57:24.480409682Z" level=info msg="StartContainer for \"7b543151421258b81f6f2d58a03b7f144c1735285ac068ef7461d1627bc88931\"" Apr 16 23:57:24.481831 containerd[1624]: time="2026-04-16T23:57:24.481790080Z" level=info msg="connecting to shim 7b543151421258b81f6f2d58a03b7f144c1735285ac068ef7461d1627bc88931" 
address="unix:///run/containerd/s/1087c9584ba8f3c76ebf3d9333fd3234504ab9c4d1bd944011523d724a282607" protocol=ttrpc version=3 Apr 16 23:57:24.506316 systemd[1]: Started cri-containerd-7b543151421258b81f6f2d58a03b7f144c1735285ac068ef7461d1627bc88931.scope - libcontainer container 7b543151421258b81f6f2d58a03b7f144c1735285ac068ef7461d1627bc88931. Apr 16 23:57:24.538703 containerd[1624]: time="2026-04-16T23:57:24.538655439Z" level=info msg="StartContainer for \"7b543151421258b81f6f2d58a03b7f144c1735285ac068ef7461d1627bc88931\" returns successfully" Apr 16 23:57:24.910170 containerd[1624]: time="2026-04-16T23:57:24.910106867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:24.911950 containerd[1624]: time="2026-04-16T23:57:24.911916184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 16 23:57:24.913806 containerd[1624]: time="2026-04-16T23:57:24.913778741Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:24.916545 containerd[1624]: time="2026-04-16T23:57:24.916503698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:24.917285 containerd[1624]: time="2026-04-16T23:57:24.917256937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.625272442s" Apr 16 23:57:24.917329 
containerd[1624]: time="2026-04-16T23:57:24.917290696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:57:24.918910 containerd[1624]: time="2026-04-16T23:57:24.918843974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:57:24.922837 containerd[1624]: time="2026-04-16T23:57:24.922486769Z" level=info msg="CreateContainer within sandbox \"03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:57:24.933060 containerd[1624]: time="2026-04-16T23:57:24.933020434Z" level=info msg="Container a2c43df7d6993218d3d527aa52a7c72ba894fa00f752273dc76e8b2849eb5646: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:24.945743 containerd[1624]: time="2026-04-16T23:57:24.945703056Z" level=info msg="CreateContainer within sandbox \"03d5c5dc3a0e626f1834dd7b7ef8ce2da470482427f20ec211ee20b8ba931b5d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a2c43df7d6993218d3d527aa52a7c72ba894fa00f752273dc76e8b2849eb5646\"" Apr 16 23:57:24.946453 containerd[1624]: time="2026-04-16T23:57:24.946239895Z" level=info msg="StartContainer for \"a2c43df7d6993218d3d527aa52a7c72ba894fa00f752273dc76e8b2849eb5646\"" Apr 16 23:57:24.948607 containerd[1624]: time="2026-04-16T23:57:24.948573212Z" level=info msg="connecting to shim a2c43df7d6993218d3d527aa52a7c72ba894fa00f752273dc76e8b2849eb5646" address="unix:///run/containerd/s/de461837d8a7a27c8eca2100397cd055b0cef5c2f86312e0cb55cc8f1b128dea" protocol=ttrpc version=3 Apr 16 23:57:24.964373 systemd[1]: Started cri-containerd-a2c43df7d6993218d3d527aa52a7c72ba894fa00f752273dc76e8b2849eb5646.scope - libcontainer container a2c43df7d6993218d3d527aa52a7c72ba894fa00f752273dc76e8b2849eb5646. 
Apr 16 23:57:24.999928 containerd[1624]: time="2026-04-16T23:57:24.999889298Z" level=info msg="StartContainer for \"a2c43df7d6993218d3d527aa52a7c72ba894fa00f752273dc76e8b2849eb5646\" returns successfully" Apr 16 23:57:25.052331 containerd[1624]: time="2026-04-16T23:57:25.052277423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc5f88bc4-fqkzx,Uid:db4fc8b7-8052-4c81-8eb8-55979d26b8c5,Namespace:calico-system,Attempt:0,}" Apr 16 23:57:25.052722 containerd[1624]: time="2026-04-16T23:57:25.052496943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnqx8,Uid:83c6c081-1afd-4376-901d-205c3cf0fa07,Namespace:calico-system,Attempt:0,}" Apr 16 23:57:25.187557 systemd-networkd[1494]: cali80b557d7f27: Link UP Apr 16 23:57:25.187699 systemd-networkd[1494]: cali80b557d7f27: Gained carrier Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.107 [INFO][4940] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0 calico-kube-controllers-7dc5f88bc4- calico-system db4fc8b7-8052-4c81-8eb8-55979d26b8c5 868 0 2026-04-16 23:56:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dc5f88bc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b calico-kube-controllers-7dc5f88bc4-fqkzx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali80b557d7f27 [] [] }} ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 
23:57:25.107 [INFO][4940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.137 [INFO][4969] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" HandleID="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.147 [INFO][4969] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" HandleID="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"calico-kube-controllers-7dc5f88bc4-fqkzx", "timestamp":"2026-04-16 23:57:25.137512141 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000406dc0)} Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.147 [INFO][4969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.147 [INFO][4969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.147 [INFO][4969] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b' Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.149 [INFO][4969] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.154 [INFO][4969] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.163 [INFO][4969] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.165 [INFO][4969] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.168 [INFO][4969] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.168 [INFO][4969] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.169 [INFO][4969] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84 Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.174 [INFO][4969] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.181 [INFO][4969] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.76.198/26] block=192.168.76.192/26 handle="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.181 [INFO][4969] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.198/26] handle="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.181 [INFO][4969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:57:25.204179 containerd[1624]: 2026-04-16 23:57:25.181 [INFO][4969] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.198/26] IPv6=[] ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" HandleID="k8s-pod-network.ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" Apr 16 23:57:25.204661 containerd[1624]: 2026-04-16 23:57:25.184 [INFO][4940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0", GenerateName:"calico-kube-controllers-7dc5f88bc4-", Namespace:"calico-system", SelfLink:"", UID:"db4fc8b7-8052-4c81-8eb8-55979d26b8c5", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dc5f88bc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"calico-kube-controllers-7dc5f88bc4-fqkzx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80b557d7f27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:25.204661 containerd[1624]: 2026-04-16 23:57:25.184 [INFO][4940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.198/32] ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" Apr 16 23:57:25.204661 containerd[1624]: 2026-04-16 23:57:25.184 [INFO][4940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80b557d7f27 ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" Apr 16 23:57:25.204661 containerd[1624]: 2026-04-16 23:57:25.186 [INFO][4940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" Apr 16 23:57:25.204661 containerd[1624]: 2026-04-16 23:57:25.188 [INFO][4940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0", GenerateName:"calico-kube-controllers-7dc5f88bc4-", Namespace:"calico-system", SelfLink:"", UID:"db4fc8b7-8052-4c81-8eb8-55979d26b8c5", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dc5f88bc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84", Pod:"calico-kube-controllers-7dc5f88bc4-fqkzx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80b557d7f27", MAC:"d6:57:48:bc:e0:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:25.204661 containerd[1624]: 2026-04-16 23:57:25.200 [INFO][4940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" Namespace="calico-system" Pod="calico-kube-controllers-7dc5f88bc4-fqkzx" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--kube--controllers--7dc5f88bc4--fqkzx-eth0" Apr 16 23:57:25.239662 containerd[1624]: time="2026-04-16T23:57:25.239622755Z" level=info msg="connecting to shim ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84" address="unix:///run/containerd/s/99fc3ea5e2c28780f66b8fd55e7fc759571c47e0edae8ee92d558dabdc8240ad" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:25.273921 kubelet[2822]: I0416 23:57:25.273793 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f4f89866d-dtrtx" podStartSLOduration=41.647264026 podStartE2EDuration="44.273774146s" podCreationTimestamp="2026-04-16 23:56:41 +0000 UTC" firstStartedPulling="2026-04-16 23:57:22.291614375 +0000 UTC m=+60.328548869" lastFinishedPulling="2026-04-16 23:57:24.918124535 +0000 UTC m=+62.955058989" observedRunningTime="2026-04-16 23:57:25.254229294 +0000 UTC m=+63.291163788" watchObservedRunningTime="2026-04-16 23:57:25.273774146 +0000 UTC m=+63.310708640" Apr 16 23:57:25.275361 kubelet[2822]: I0416 23:57:25.275152 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z7m5k" podStartSLOduration=57.275139144 podStartE2EDuration="57.275139144s" podCreationTimestamp="2026-04-16 23:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:57:25.273990106 +0000 UTC m=+63.310924600" watchObservedRunningTime="2026-04-16 23:57:25.275139144 +0000 UTC m=+63.312073638" Apr 16 23:57:25.281223 systemd[1]: Started cri-containerd-ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84.scope - libcontainer container ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84. Apr 16 23:57:25.302408 kubelet[2822]: I0416 23:57:25.302166 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h2ts7" podStartSLOduration=57.302149865 podStartE2EDuration="57.302149865s" podCreationTimestamp="2026-04-16 23:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:57:25.302034426 +0000 UTC m=+63.338968880" watchObservedRunningTime="2026-04-16 23:57:25.302149865 +0000 UTC m=+63.339084359" Apr 16 23:57:25.324286 systemd-networkd[1494]: cali0f8eb14e434: Link UP Apr 16 23:57:25.325427 systemd-networkd[1494]: cali0f8eb14e434: Gained carrier Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.110 [INFO][4951] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0 csi-node-driver- calico-system 83c6c081-1afd-4376-901d-205c3cf0fa07 708 0 2026-04-16 23:56:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b csi-node-driver-cnqx8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f8eb14e434 [] [] }} 
ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.110 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.138 [INFO][4971] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" HandleID="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Workload="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.154 [INFO][4971] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" HandleID="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Workload="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"csi-node-driver-cnqx8", "timestamp":"2026-04-16 23:57:25.13814518 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000276f20)} Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.154 [INFO][4971] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.181 [INFO][4971] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.181 [INFO][4971] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b' Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.256 [INFO][4971] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.267 [INFO][4971] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.281 [INFO][4971] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.286 [INFO][4971] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.297 [INFO][4971] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.297 [INFO][4971] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.301 [INFO][4971] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.309 [INFO][4971] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" 
host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.317 [INFO][4971] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.199/26] block=192.168.76.192/26 handle="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.317 [INFO][4971] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.199/26] handle="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.317 [INFO][4971] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:57:25.342995 containerd[1624]: 2026-04-16 23:57:25.317 [INFO][4971] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.199/26] IPv6=[] ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" HandleID="k8s-pod-network.beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Workload="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" Apr 16 23:57:25.343626 containerd[1624]: 2026-04-16 23:57:25.321 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83c6c081-1afd-4376-901d-205c3cf0fa07", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"csi-node-driver-cnqx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f8eb14e434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:25.343626 containerd[1624]: 2026-04-16 23:57:25.321 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.199/32] ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" Apr 16 23:57:25.343626 containerd[1624]: 2026-04-16 23:57:25.321 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f8eb14e434 ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" Apr 16 23:57:25.343626 containerd[1624]: 2026-04-16 23:57:25.324 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" 
Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" Apr 16 23:57:25.343626 containerd[1624]: 2026-04-16 23:57:25.324 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83c6c081-1afd-4376-901d-205c3cf0fa07", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb", Pod:"csi-node-driver-cnqx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f8eb14e434", MAC:"22:05:55:5a:e2:f3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:25.343626 containerd[1624]: 2026-04-16 23:57:25.339 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" Namespace="calico-system" Pod="csi-node-driver-cnqx8" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-csi--node--driver--cnqx8-eth0" Apr 16 23:57:25.384487 containerd[1624]: time="2026-04-16T23:57:25.384186468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc5f88bc4-fqkzx,Uid:db4fc8b7-8052-4c81-8eb8-55979d26b8c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84\"" Apr 16 23:57:25.384601 containerd[1624]: time="2026-04-16T23:57:25.384454948Z" level=info msg="connecting to shim beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb" address="unix:///run/containerd/s/91298b03d3fa396fd4243356b8e307e36505a9b4c57f0771388e2a56a8441bba" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:25.407223 systemd[1]: Started cri-containerd-beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb.scope - libcontainer container beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb. 
Apr 16 23:57:25.433786 containerd[1624]: time="2026-04-16T23:57:25.433735757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cnqx8,Uid:83c6c081-1afd-4376-901d-205c3cf0fa07,Namespace:calico-system,Attempt:0,} returns sandbox id \"beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb\"" Apr 16 23:57:25.581273 systemd-networkd[1494]: calif10fac810ce: Gained IPv6LL Apr 16 23:57:25.837727 systemd-networkd[1494]: cali7a6017392e9: Gained IPv6LL Apr 16 23:57:26.053139 containerd[1624]: time="2026-04-16T23:57:26.053087790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-h5fkl,Uid:9f322798-d216-4e6e-a073-43fa8544e291,Namespace:calico-system,Attempt:0,}" Apr 16 23:57:26.187793 systemd-networkd[1494]: calib410d629506: Link UP Apr 16 23:57:26.189347 systemd-networkd[1494]: calib410d629506: Gained carrier Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.103 [INFO][5134] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0 calico-apiserver-7f4f89866d- calico-system 9f322798-d216-4e6e-a073-43fa8544e291 866 0 2026-04-16 23:56:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f4f89866d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-fcb502653b calico-apiserver-7f4f89866d-h5fkl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib410d629506 [] [] }} ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.103 
[INFO][5134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.139 [INFO][5150] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" HandleID="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.150 [INFO][5150] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" HandleID="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000390090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-fcb502653b", "pod":"calico-apiserver-7f4f89866d-h5fkl", "timestamp":"2026-04-16 23:57:26.139637787 +0000 UTC"}, Hostname:"ci-4459-2-4-n-fcb502653b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001966e0)} Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.150 [INFO][5150] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.150 [INFO][5150] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.150 [INFO][5150] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-fcb502653b' Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.153 [INFO][5150] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.157 [INFO][5150] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.163 [INFO][5150] ipam/ipam.go 526: Trying affinity for 192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.164 [INFO][5150] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.168 [INFO][5150] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.192/26 host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.168 [INFO][5150] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.192/26 handle="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.170 [INFO][5150] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6 Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.175 [INFO][5150] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.192/26 handle="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.183 [INFO][5150] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.76.200/26] block=192.168.76.192/26 handle="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.183 [INFO][5150] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.200/26] handle="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" host="ci-4459-2-4-n-fcb502653b" Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.183 [INFO][5150] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:57:26.202592 containerd[1624]: 2026-04-16 23:57:26.183 [INFO][5150] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.200/26] IPv6=[] ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" HandleID="k8s-pod-network.371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Workload="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" Apr 16 23:57:26.203198 containerd[1624]: 2026-04-16 23:57:26.185 [INFO][5134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0", GenerateName:"calico-apiserver-7f4f89866d-", Namespace:"calico-system", SelfLink:"", UID:"9f322798-d216-4e6e-a073-43fa8544e291", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f89866d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"", Pod:"calico-apiserver-7f4f89866d-h5fkl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib410d629506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:26.203198 containerd[1624]: 2026-04-16 23:57:26.186 [INFO][5134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.200/32] ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" Apr 16 23:57:26.203198 containerd[1624]: 2026-04-16 23:57:26.186 [INFO][5134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib410d629506 ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" Apr 16 23:57:26.203198 containerd[1624]: 2026-04-16 23:57:26.189 [INFO][5134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" 
Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" Apr 16 23:57:26.203198 containerd[1624]: 2026-04-16 23:57:26.190 [INFO][5134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0", GenerateName:"calico-apiserver-7f4f89866d-", Namespace:"calico-system", SelfLink:"", UID:"9f322798-d216-4e6e-a073-43fa8544e291", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f89866d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-fcb502653b", ContainerID:"371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6", Pod:"calico-apiserver-7f4f89866d-h5fkl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib410d629506", 
MAC:"d6:3d:e9:1c:8c:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:57:26.203198 containerd[1624]: 2026-04-16 23:57:26.200 [INFO][5134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" Namespace="calico-system" Pod="calico-apiserver-7f4f89866d-h5fkl" WorkloadEndpoint="ci--4459--2--4--n--fcb502653b-k8s-calico--apiserver--7f4f89866d--h5fkl-eth0" Apr 16 23:57:26.236492 containerd[1624]: time="2026-04-16T23:57:26.236432968Z" level=info msg="connecting to shim 371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6" address="unix:///run/containerd/s/d042598753d4345727424cfdf066c9eff6d6af97780ce381f67e1e799c2d0ff3" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:26.249249 kubelet[2822]: I0416 23:57:26.249205 2822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:57:26.262182 systemd[1]: Started cri-containerd-371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6.scope - libcontainer container 371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6. 
Apr 16 23:57:26.294390 containerd[1624]: time="2026-04-16T23:57:26.294348285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f89866d-h5fkl,Uid:9f322798-d216-4e6e-a073-43fa8544e291,Namespace:calico-system,Attempt:0,} returns sandbox id \"371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6\"" Apr 16 23:57:26.301072 containerd[1624]: time="2026-04-16T23:57:26.300694236Z" level=info msg="CreateContainer within sandbox \"371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:57:26.314685 containerd[1624]: time="2026-04-16T23:57:26.314344016Z" level=info msg="Container 59924a16bf7599aee076f7d646a85522cd9cf6fdad2c2314e3a10650e8047566: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:26.335887 containerd[1624]: time="2026-04-16T23:57:26.335771986Z" level=info msg="CreateContainer within sandbox \"371cc2c17845dfd2ee8dc76b28876d769c8d3d3e32493f48bc53ebd1941056f6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59924a16bf7599aee076f7d646a85522cd9cf6fdad2c2314e3a10650e8047566\"" Apr 16 23:57:26.336984 containerd[1624]: time="2026-04-16T23:57:26.336945024Z" level=info msg="StartContainer for \"59924a16bf7599aee076f7d646a85522cd9cf6fdad2c2314e3a10650e8047566\"" Apr 16 23:57:26.338093 containerd[1624]: time="2026-04-16T23:57:26.338062543Z" level=info msg="connecting to shim 59924a16bf7599aee076f7d646a85522cd9cf6fdad2c2314e3a10650e8047566" address="unix:///run/containerd/s/d042598753d4345727424cfdf066c9eff6d6af97780ce381f67e1e799c2d0ff3" protocol=ttrpc version=3 Apr 16 23:57:26.359217 systemd[1]: Started cri-containerd-59924a16bf7599aee076f7d646a85522cd9cf6fdad2c2314e3a10650e8047566.scope - libcontainer container 59924a16bf7599aee076f7d646a85522cd9cf6fdad2c2314e3a10650e8047566. 
Apr 16 23:57:26.403356 containerd[1624]: time="2026-04-16T23:57:26.403317569Z" level=info msg="StartContainer for \"59924a16bf7599aee076f7d646a85522cd9cf6fdad2c2314e3a10650e8047566\" returns successfully" Apr 16 23:57:26.477328 systemd-networkd[1494]: cali0f8eb14e434: Gained IPv6LL Apr 16 23:57:26.797581 systemd-networkd[1494]: cali80b557d7f27: Gained IPv6LL Apr 16 23:57:27.233423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048960995.mount: Deactivated successfully. Apr 16 23:57:27.269385 kubelet[2822]: I0416 23:57:27.269323 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f4f89866d-h5fkl" podStartSLOduration=46.26913877 podStartE2EDuration="46.26913877s" podCreationTimestamp="2026-04-16 23:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:57:27.266952253 +0000 UTC m=+65.303886747" watchObservedRunningTime="2026-04-16 23:57:27.26913877 +0000 UTC m=+65.306073264" Apr 16 23:57:27.438315 systemd-networkd[1494]: calib410d629506: Gained IPv6LL Apr 16 23:57:27.550880 containerd[1624]: time="2026-04-16T23:57:27.550812726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:27.552250 containerd[1624]: time="2026-04-16T23:57:27.552226644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 16 23:57:27.554030 containerd[1624]: time="2026-04-16T23:57:27.554005842Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:27.557727 containerd[1624]: time="2026-04-16T23:57:27.557698357Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:27.558489 containerd[1624]: time="2026-04-16T23:57:27.558461435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.639571301s" Apr 16 23:57:27.558599 containerd[1624]: time="2026-04-16T23:57:27.558582715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 16 23:57:27.559971 containerd[1624]: time="2026-04-16T23:57:27.559928433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 23:57:27.565210 containerd[1624]: time="2026-04-16T23:57:27.565144226Z" level=info msg="CreateContainer within sandbox \"10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 23:57:27.582114 containerd[1624]: time="2026-04-16T23:57:27.582079082Z" level=info msg="Container 619b3da86776e66d47f88c28fc5c7ec05229801b454b2d3c1fb043b620cfcc4e: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:27.600419 containerd[1624]: time="2026-04-16T23:57:27.600370455Z" level=info msg="CreateContainer within sandbox \"10c46ef4fd5b869480045cc235fc450df17f1e5c73edec1ce8734427a3e22839\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"619b3da86776e66d47f88c28fc5c7ec05229801b454b2d3c1fb043b620cfcc4e\"" Apr 16 23:57:27.601192 containerd[1624]: time="2026-04-16T23:57:27.601164574Z" level=info msg="StartContainer for 
\"619b3da86776e66d47f88c28fc5c7ec05229801b454b2d3c1fb043b620cfcc4e\"" Apr 16 23:57:27.602755 containerd[1624]: time="2026-04-16T23:57:27.602675452Z" level=info msg="connecting to shim 619b3da86776e66d47f88c28fc5c7ec05229801b454b2d3c1fb043b620cfcc4e" address="unix:///run/containerd/s/cf902df878d99be2b13f6b531fee56e8055487ef9fc936c6e35d550bf494da9e" protocol=ttrpc version=3 Apr 16 23:57:27.625195 systemd[1]: Started cri-containerd-619b3da86776e66d47f88c28fc5c7ec05229801b454b2d3c1fb043b620cfcc4e.scope - libcontainer container 619b3da86776e66d47f88c28fc5c7ec05229801b454b2d3c1fb043b620cfcc4e. Apr 16 23:57:27.660650 containerd[1624]: time="2026-04-16T23:57:27.660613209Z" level=info msg="StartContainer for \"619b3da86776e66d47f88c28fc5c7ec05229801b454b2d3c1fb043b620cfcc4e\" returns successfully" Apr 16 23:57:28.268590 kubelet[2822]: I0416 23:57:28.268522 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-t64m4" podStartSLOduration=42.094440906 podStartE2EDuration="47.268509259s" podCreationTimestamp="2026-04-16 23:56:41 +0000 UTC" firstStartedPulling="2026-04-16 23:57:22.385630561 +0000 UTC m=+60.422565055" lastFinishedPulling="2026-04-16 23:57:27.559698914 +0000 UTC m=+65.596633408" observedRunningTime="2026-04-16 23:57:28.26810662 +0000 UTC m=+66.305041114" watchObservedRunningTime="2026-04-16 23:57:28.268509259 +0000 UTC m=+66.305443753" Apr 16 23:57:30.587782 containerd[1624]: time="2026-04-16T23:57:30.587686538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:30.590145 containerd[1624]: time="2026-04-16T23:57:30.590056254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 16 23:57:30.591709 containerd[1624]: time="2026-04-16T23:57:30.591669372Z" level=info msg="ImageCreate event 
name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:30.595411 containerd[1624]: time="2026-04-16T23:57:30.595102327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:30.596019 containerd[1624]: time="2026-04-16T23:57:30.595986766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.036013973s" Apr 16 23:57:30.596076 containerd[1624]: time="2026-04-16T23:57:30.596026326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 16 23:57:30.597015 containerd[1624]: time="2026-04-16T23:57:30.596888484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:57:30.610837 containerd[1624]: time="2026-04-16T23:57:30.610800705Z" level=info msg="CreateContainer within sandbox \"ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 23:57:30.624544 containerd[1624]: time="2026-04-16T23:57:30.623614486Z" level=info msg="Container cbe0b71d6676fb29ca0221e6fb1d798ccfbf00d3bbd9d4d21181ae5589f7fa60: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:30.634822 containerd[1624]: time="2026-04-16T23:57:30.634761310Z" level=info msg="CreateContainer within sandbox \"ce4d744b785d12f661f333055a3a3769b8ad52966006e31b25724ed38f5ded84\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cbe0b71d6676fb29ca0221e6fb1d798ccfbf00d3bbd9d4d21181ae5589f7fa60\"" Apr 16 23:57:30.635416 containerd[1624]: time="2026-04-16T23:57:30.635341229Z" level=info msg="StartContainer for \"cbe0b71d6676fb29ca0221e6fb1d798ccfbf00d3bbd9d4d21181ae5589f7fa60\"" Apr 16 23:57:30.636754 containerd[1624]: time="2026-04-16T23:57:30.636725787Z" level=info msg="connecting to shim cbe0b71d6676fb29ca0221e6fb1d798ccfbf00d3bbd9d4d21181ae5589f7fa60" address="unix:///run/containerd/s/99fc3ea5e2c28780f66b8fd55e7fc759571c47e0edae8ee92d558dabdc8240ad" protocol=ttrpc version=3 Apr 16 23:57:30.657231 systemd[1]: Started cri-containerd-cbe0b71d6676fb29ca0221e6fb1d798ccfbf00d3bbd9d4d21181ae5589f7fa60.scope - libcontainer container cbe0b71d6676fb29ca0221e6fb1d798ccfbf00d3bbd9d4d21181ae5589f7fa60. Apr 16 23:57:30.696976 containerd[1624]: time="2026-04-16T23:57:30.696936781Z" level=info msg="StartContainer for \"cbe0b71d6676fb29ca0221e6fb1d798ccfbf00d3bbd9d4d21181ae5589f7fa60\" returns successfully" Apr 16 23:57:31.314356 kubelet[2822]: I0416 23:57:31.314283 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dc5f88bc4-fqkzx" podStartSLOduration=44.105804914 podStartE2EDuration="49.314268257s" podCreationTimestamp="2026-04-16 23:56:42 +0000 UTC" firstStartedPulling="2026-04-16 23:57:25.388259142 +0000 UTC m=+63.425193636" lastFinishedPulling="2026-04-16 23:57:30.596722485 +0000 UTC m=+68.633656979" observedRunningTime="2026-04-16 23:57:31.282461463 +0000 UTC m=+69.319395957" watchObservedRunningTime="2026-04-16 23:57:31.314268257 +0000 UTC m=+69.351202751" Apr 16 23:57:32.694863 containerd[1624]: time="2026-04-16T23:57:32.694778401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:32.697036 containerd[1624]: time="2026-04-16T23:57:32.696986398Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 16 23:57:32.698611 containerd[1624]: time="2026-04-16T23:57:32.698561356Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:32.702576 containerd[1624]: time="2026-04-16T23:57:32.702483870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:32.703276 containerd[1624]: time="2026-04-16T23:57:32.703216789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.106285385s" Apr 16 23:57:32.703324 containerd[1624]: time="2026-04-16T23:57:32.703293189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 16 23:57:32.709894 containerd[1624]: time="2026-04-16T23:57:32.709846060Z" level=info msg="CreateContainer within sandbox \"beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:57:32.725540 containerd[1624]: time="2026-04-16T23:57:32.725500997Z" level=info msg="Container 8382d0106bdcfab809d7c6966c530ba43a327e0b74ec2fa850024382d48aab41: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:32.740222 containerd[1624]: time="2026-04-16T23:57:32.740164496Z" level=info msg="CreateContainer within sandbox \"beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8382d0106bdcfab809d7c6966c530ba43a327e0b74ec2fa850024382d48aab41\"" Apr 16 23:57:32.741955 containerd[1624]: time="2026-04-16T23:57:32.741019295Z" level=info msg="StartContainer for \"8382d0106bdcfab809d7c6966c530ba43a327e0b74ec2fa850024382d48aab41\"" Apr 16 23:57:32.742704 containerd[1624]: time="2026-04-16T23:57:32.742666093Z" level=info msg="connecting to shim 8382d0106bdcfab809d7c6966c530ba43a327e0b74ec2fa850024382d48aab41" address="unix:///run/containerd/s/91298b03d3fa396fd4243356b8e307e36505a9b4c57f0771388e2a56a8441bba" protocol=ttrpc version=3 Apr 16 23:57:32.760197 systemd[1]: Started cri-containerd-8382d0106bdcfab809d7c6966c530ba43a327e0b74ec2fa850024382d48aab41.scope - libcontainer container 8382d0106bdcfab809d7c6966c530ba43a327e0b74ec2fa850024382d48aab41. Apr 16 23:57:32.826611 containerd[1624]: time="2026-04-16T23:57:32.826563172Z" level=info msg="StartContainer for \"8382d0106bdcfab809d7c6966c530ba43a327e0b74ec2fa850024382d48aab41\" returns successfully" Apr 16 23:57:32.827873 containerd[1624]: time="2026-04-16T23:57:32.827830131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:57:34.578346 containerd[1624]: time="2026-04-16T23:57:34.578253745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:34.579682 containerd[1624]: time="2026-04-16T23:57:34.579642383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 16 23:57:34.582077 containerd[1624]: time="2026-04-16T23:57:34.582026299Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:34.585194 containerd[1624]: time="2026-04-16T23:57:34.585146335Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:34.585887 containerd[1624]: time="2026-04-16T23:57:34.585775734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.757907363s" Apr 16 23:57:34.585887 containerd[1624]: time="2026-04-16T23:57:34.585805134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 16 23:57:34.591663 containerd[1624]: time="2026-04-16T23:57:34.591632246Z" level=info msg="CreateContainer within sandbox \"beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 23:57:34.606062 containerd[1624]: time="2026-04-16T23:57:34.605171546Z" level=info msg="Container ddcdbdd50c2b792e73672331984ce567cfcf9b5484157c1cab5de0e5bc14007b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:34.618334 containerd[1624]: time="2026-04-16T23:57:34.618258367Z" level=info msg="CreateContainer within sandbox \"beaebda2ed73e8f7d99e665c38957616dc7486a1d303ff80515ee883a46d0adb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ddcdbdd50c2b792e73672331984ce567cfcf9b5484157c1cab5de0e5bc14007b\"" Apr 16 23:57:34.618808 containerd[1624]: time="2026-04-16T23:57:34.618787607Z" level=info msg="StartContainer for \"ddcdbdd50c2b792e73672331984ce567cfcf9b5484157c1cab5de0e5bc14007b\"" 
Apr 16 23:57:34.620953 containerd[1624]: time="2026-04-16T23:57:34.620880204Z" level=info msg="connecting to shim ddcdbdd50c2b792e73672331984ce567cfcf9b5484157c1cab5de0e5bc14007b" address="unix:///run/containerd/s/91298b03d3fa396fd4243356b8e307e36505a9b4c57f0771388e2a56a8441bba" protocol=ttrpc version=3 Apr 16 23:57:34.640290 systemd[1]: Started cri-containerd-ddcdbdd50c2b792e73672331984ce567cfcf9b5484157c1cab5de0e5bc14007b.scope - libcontainer container ddcdbdd50c2b792e73672331984ce567cfcf9b5484157c1cab5de0e5bc14007b. Apr 16 23:57:34.713096 containerd[1624]: time="2026-04-16T23:57:34.712628592Z" level=info msg="StartContainer for \"ddcdbdd50c2b792e73672331984ce567cfcf9b5484157c1cab5de0e5bc14007b\" returns successfully" Apr 16 23:57:35.129842 kubelet[2822]: I0416 23:57:35.129759 2822 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 23:57:35.129842 kubelet[2822]: I0416 23:57:35.129791 2822 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 23:57:35.300260 kubelet[2822]: I0416 23:57:35.300172 2822 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cnqx8" podStartSLOduration=44.148857373 podStartE2EDuration="53.300156991s" podCreationTimestamp="2026-04-16 23:56:42 +0000 UTC" firstStartedPulling="2026-04-16 23:57:25.435341515 +0000 UTC m=+63.472275969" lastFinishedPulling="2026-04-16 23:57:34.586641133 +0000 UTC m=+72.623575587" observedRunningTime="2026-04-16 23:57:35.299948112 +0000 UTC m=+73.336882606" watchObservedRunningTime="2026-04-16 23:57:35.300156991 +0000 UTC m=+73.337091485" Apr 16 23:57:45.574381 kubelet[2822]: I0416 23:57:45.574126 2822 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:59:01.987928 systemd[1]: Started 
sshd@7-10.0.0.99:22-50.85.169.122:53618.service - OpenSSH per-connection server daemon (50.85.169.122:53618). Apr 16 23:59:02.095859 sshd[5944]: Accepted publickey for core from 50.85.169.122 port 53618 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:02.097276 sshd-session[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:02.101149 systemd-logind[1603]: New session 8 of user core. Apr 16 23:59:02.106192 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 23:59:02.208581 sshd[5947]: Connection closed by 50.85.169.122 port 53618 Apr 16 23:59:02.209141 sshd-session[5944]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:02.212627 systemd[1]: sshd@7-10.0.0.99:22-50.85.169.122:53618.service: Deactivated successfully. Apr 16 23:59:02.214605 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 23:59:02.215480 systemd-logind[1603]: Session 8 logged out. Waiting for processes to exit. Apr 16 23:59:02.217003 systemd-logind[1603]: Removed session 8. Apr 16 23:59:06.522194 update_engine[1607]: I20260416 23:59:06.522130 1607 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 23:59:06.522194 update_engine[1607]: I20260416 23:59:06.522178 1607 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 23:59:06.522713 update_engine[1607]: I20260416 23:59:06.522411 1607 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 23:59:06.522997 update_engine[1607]: I20260416 23:59:06.522897 1607 omaha_request_params.cc:62] Current group set to stable Apr 16 23:59:06.522997 update_engine[1607]: I20260416 23:59:06.522984 1607 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 16 23:59:06.522997 update_engine[1607]: I20260416 23:59:06.522991 1607 update_attempter.cc:643] Scheduling an action processor start. 
Apr 16 23:59:06.523121 update_engine[1607]: I20260416 23:59:06.523005 1607 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 23:59:06.523121 update_engine[1607]: I20260416 23:59:06.523032 1607 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 23:59:06.523121 update_engine[1607]: I20260416 23:59:06.523110 1607 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 23:59:06.523121 update_engine[1607]: I20260416 23:59:06.523118 1607 omaha_request_action.cc:272] Request: Apr 16 23:59:06.523719 update_engine[1607]: I20260416 23:59:06.523123 1607 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:59:06.523765 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 23:59:06.524839 update_engine[1607]: I20260416 23:59:06.524801 1607 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:59:06.525587 update_engine[1607]: I20260416 23:59:06.525539 1607 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 23:59:06.532761 update_engine[1607]: E20260416 23:59:06.532707 1607 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:59:06.532845 update_engine[1607]: I20260416 23:59:06.532790 1607 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 23:59:07.233475 systemd[1]: Started sshd@8-10.0.0.99:22-50.85.169.122:53622.service - OpenSSH per-connection server daemon (50.85.169.122:53622). 
Apr 16 23:59:07.343402 sshd[5962]: Accepted publickey for core from 50.85.169.122 port 53622 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:07.344777 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:07.349093 systemd-logind[1603]: New session 9 of user core. Apr 16 23:59:07.356180 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 23:59:07.452522 sshd[5965]: Connection closed by 50.85.169.122 port 53622 Apr 16 23:59:07.454094 sshd-session[5962]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:07.457427 systemd[1]: sshd@8-10.0.0.99:22-50.85.169.122:53622.service: Deactivated successfully. Apr 16 23:59:07.459230 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 23:59:07.459989 systemd-logind[1603]: Session 9 logged out. Waiting for processes to exit. Apr 16 23:59:07.461578 systemd-logind[1603]: Removed session 9. Apr 16 23:59:12.476284 systemd[1]: Started sshd@9-10.0.0.99:22-50.85.169.122:39168.service - OpenSSH per-connection server daemon (50.85.169.122:39168). Apr 16 23:59:12.589083 sshd[5980]: Accepted publickey for core from 50.85.169.122 port 39168 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:12.589966 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:12.594305 systemd-logind[1603]: New session 10 of user core. Apr 16 23:59:12.604411 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 23:59:12.696073 sshd[5983]: Connection closed by 50.85.169.122 port 39168 Apr 16 23:59:12.696610 sshd-session[5980]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:12.699908 systemd[1]: sshd@9-10.0.0.99:22-50.85.169.122:39168.service: Deactivated successfully. Apr 16 23:59:12.701846 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 23:59:12.703674 systemd-logind[1603]: Session 10 logged out. 
Waiting for processes to exit. Apr 16 23:59:12.705124 systemd-logind[1603]: Removed session 10. Apr 16 23:59:16.522018 update_engine[1607]: I20260416 23:59:16.521370 1607 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:59:16.522018 update_engine[1607]: I20260416 23:59:16.521524 1607 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:59:16.522018 update_engine[1607]: I20260416 23:59:16.521967 1607 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 23:59:16.527555 update_engine[1607]: E20260416 23:59:16.527513 1607 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:59:16.527831 update_engine[1607]: I20260416 23:59:16.527808 1607 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 23:59:17.721273 systemd[1]: Started sshd@10-10.0.0.99:22-50.85.169.122:39178.service - OpenSSH per-connection server daemon (50.85.169.122:39178). Apr 16 23:59:17.831182 sshd[6038]: Accepted publickey for core from 50.85.169.122 port 39178 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:17.832582 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:17.837171 systemd-logind[1603]: New session 11 of user core. Apr 16 23:59:17.843335 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 23:59:17.938120 sshd[6041]: Connection closed by 50.85.169.122 port 39178 Apr 16 23:59:17.938089 sshd-session[6038]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:17.941609 systemd[1]: sshd@10-10.0.0.99:22-50.85.169.122:39178.service: Deactivated successfully. Apr 16 23:59:17.944851 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 23:59:17.946255 systemd-logind[1603]: Session 11 logged out. Waiting for processes to exit. Apr 16 23:59:17.947892 systemd-logind[1603]: Removed session 11. 
Apr 16 23:59:17.966672 systemd[1]: Started sshd@11-10.0.0.99:22-50.85.169.122:39182.service - OpenSSH per-connection server daemon (50.85.169.122:39182). Apr 16 23:59:18.075455 sshd[6056]: Accepted publickey for core from 50.85.169.122 port 39182 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:18.076809 sshd-session[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:18.080602 systemd-logind[1603]: New session 12 of user core. Apr 16 23:59:18.090289 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 23:59:18.218227 sshd[6059]: Connection closed by 50.85.169.122 port 39182 Apr 16 23:59:18.219093 sshd-session[6056]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:18.225474 systemd[1]: sshd@11-10.0.0.99:22-50.85.169.122:39182.service: Deactivated successfully. Apr 16 23:59:18.230906 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 23:59:18.232481 systemd-logind[1603]: Session 12 logged out. Waiting for processes to exit. Apr 16 23:59:18.243603 systemd[1]: Started sshd@12-10.0.0.99:22-50.85.169.122:39186.service - OpenSSH per-connection server daemon (50.85.169.122:39186). Apr 16 23:59:18.244632 systemd-logind[1603]: Removed session 12. Apr 16 23:59:18.359802 sshd[6071]: Accepted publickey for core from 50.85.169.122 port 39186 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:18.361021 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:18.365029 systemd-logind[1603]: New session 13 of user core. Apr 16 23:59:18.377232 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 23:59:18.473561 sshd[6074]: Connection closed by 50.85.169.122 port 39186 Apr 16 23:59:18.473947 sshd-session[6071]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:18.477370 systemd[1]: sshd@12-10.0.0.99:22-50.85.169.122:39186.service: Deactivated successfully. 
Apr 16 23:59:18.479489 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 23:59:18.480611 systemd-logind[1603]: Session 13 logged out. Waiting for processes to exit. Apr 16 23:59:18.482315 systemd-logind[1603]: Removed session 13. Apr 16 23:59:23.505254 systemd[1]: Started sshd@13-10.0.0.99:22-50.85.169.122:49318.service - OpenSSH per-connection server daemon (50.85.169.122:49318). Apr 16 23:59:23.615764 sshd[6090]: Accepted publickey for core from 50.85.169.122 port 49318 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:23.617066 sshd-session[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:23.621337 systemd-logind[1603]: New session 14 of user core. Apr 16 23:59:23.630362 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 23:59:23.723681 sshd[6093]: Connection closed by 50.85.169.122 port 49318 Apr 16 23:59:23.724148 sshd-session[6090]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:23.727527 systemd[1]: sshd@13-10.0.0.99:22-50.85.169.122:49318.service: Deactivated successfully. Apr 16 23:59:23.729297 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 23:59:23.730188 systemd-logind[1603]: Session 14 logged out. Waiting for processes to exit. Apr 16 23:59:23.731606 systemd-logind[1603]: Removed session 14. Apr 16 23:59:23.747515 systemd[1]: Started sshd@14-10.0.0.99:22-50.85.169.122:49322.service - OpenSSH per-connection server daemon (50.85.169.122:49322). Apr 16 23:59:23.856214 sshd[6106]: Accepted publickey for core from 50.85.169.122 port 49322 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:23.857538 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:23.862097 systemd-logind[1603]: New session 15 of user core. Apr 16 23:59:23.871422 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 16 23:59:24.008547 sshd[6109]: Connection closed by 50.85.169.122 port 49322 Apr 16 23:59:24.008924 sshd-session[6106]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:24.012416 systemd[1]: sshd@14-10.0.0.99:22-50.85.169.122:49322.service: Deactivated successfully. Apr 16 23:59:24.014983 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 23:59:24.015933 systemd-logind[1603]: Session 15 logged out. Waiting for processes to exit. Apr 16 23:59:24.017215 systemd-logind[1603]: Removed session 15. Apr 16 23:59:24.035905 systemd[1]: Started sshd@15-10.0.0.99:22-50.85.169.122:49332.service - OpenSSH per-connection server daemon (50.85.169.122:49332). Apr 16 23:59:24.146152 sshd[6120]: Accepted publickey for core from 50.85.169.122 port 49332 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:24.147272 sshd-session[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:24.151262 systemd-logind[1603]: New session 16 of user core. Apr 16 23:59:24.160192 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 23:59:24.758051 sshd[6123]: Connection closed by 50.85.169.122 port 49332 Apr 16 23:59:24.758417 sshd-session[6120]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:24.764252 systemd[1]: sshd@15-10.0.0.99:22-50.85.169.122:49332.service: Deactivated successfully. Apr 16 23:59:24.770290 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 23:59:24.771269 systemd-logind[1603]: Session 16 logged out. Waiting for processes to exit. Apr 16 23:59:24.772816 systemd-logind[1603]: Removed session 16. Apr 16 23:59:24.785432 systemd[1]: Started sshd@16-10.0.0.99:22-50.85.169.122:49336.service - OpenSSH per-connection server daemon (50.85.169.122:49336). 
Apr 16 23:59:24.908102 sshd[6150]: Accepted publickey for core from 50.85.169.122 port 49336 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:24.909060 sshd-session[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:24.913439 systemd-logind[1603]: New session 17 of user core. Apr 16 23:59:24.924206 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 23:59:25.129960 sshd[6153]: Connection closed by 50.85.169.122 port 49336 Apr 16 23:59:25.130256 sshd-session[6150]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:25.134606 systemd[1]: sshd@16-10.0.0.99:22-50.85.169.122:49336.service: Deactivated successfully. Apr 16 23:59:25.136478 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 23:59:25.137266 systemd-logind[1603]: Session 17 logged out. Waiting for processes to exit. Apr 16 23:59:25.138237 systemd-logind[1603]: Removed session 17. Apr 16 23:59:25.147307 systemd[1]: Started sshd@17-10.0.0.99:22-50.85.169.122:49348.service - OpenSSH per-connection server daemon (50.85.169.122:49348). Apr 16 23:59:25.262105 sshd[6164]: Accepted publickey for core from 50.85.169.122 port 49348 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:25.263476 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:25.267970 systemd-logind[1603]: New session 18 of user core. Apr 16 23:59:25.278370 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 23:59:25.371629 sshd[6168]: Connection closed by 50.85.169.122 port 49348 Apr 16 23:59:25.372947 sshd-session[6164]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:25.376718 systemd[1]: sshd@17-10.0.0.99:22-50.85.169.122:49348.service: Deactivated successfully. Apr 16 23:59:25.378409 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 23:59:25.380015 systemd-logind[1603]: Session 18 logged out. 
Waiting for processes to exit. Apr 16 23:59:25.381628 systemd-logind[1603]: Removed session 18. Apr 16 23:59:26.519353 update_engine[1607]: I20260416 23:59:26.519253 1607 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:59:26.519692 update_engine[1607]: I20260416 23:59:26.519394 1607 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:59:26.520242 update_engine[1607]: I20260416 23:59:26.520208 1607 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 23:59:26.524633 update_engine[1607]: E20260416 23:59:26.524583 1607 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:59:26.524685 update_engine[1607]: I20260416 23:59:26.524673 1607 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 23:59:30.395557 systemd[1]: Started sshd@18-10.0.0.99:22-50.85.169.122:50192.service - OpenSSH per-connection server daemon (50.85.169.122:50192). Apr 16 23:59:30.510539 sshd[6235]: Accepted publickey for core from 50.85.169.122 port 50192 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:30.511827 sshd-session[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:30.516151 systemd-logind[1603]: New session 19 of user core. Apr 16 23:59:30.523226 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 23:59:30.613898 sshd[6238]: Connection closed by 50.85.169.122 port 50192 Apr 16 23:59:30.614768 sshd-session[6235]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:30.618217 systemd[1]: sshd@18-10.0.0.99:22-50.85.169.122:50192.service: Deactivated successfully. Apr 16 23:59:30.620677 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 23:59:30.621664 systemd-logind[1603]: Session 19 logged out. Waiting for processes to exit. Apr 16 23:59:30.622690 systemd-logind[1603]: Removed session 19. 
Apr 16 23:59:35.639597 systemd[1]: Started sshd@19-10.0.0.99:22-50.85.169.122:50198.service - OpenSSH per-connection server daemon (50.85.169.122:50198). Apr 16 23:59:35.754149 sshd[6298]: Accepted publickey for core from 50.85.169.122 port 50198 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:35.755361 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:35.759291 systemd-logind[1603]: New session 20 of user core. Apr 16 23:59:35.771196 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 23:59:35.864275 sshd[6301]: Connection closed by 50.85.169.122 port 50198 Apr 16 23:59:35.864873 sshd-session[6298]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:35.868436 systemd[1]: sshd@19-10.0.0.99:22-50.85.169.122:50198.service: Deactivated successfully. Apr 16 23:59:35.870211 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 23:59:35.871521 systemd-logind[1603]: Session 20 logged out. Waiting for processes to exit. Apr 16 23:59:35.872469 systemd-logind[1603]: Removed session 20. Apr 16 23:59:36.518158 update_engine[1607]: I20260416 23:59:36.518083 1607 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:59:36.518644 update_engine[1607]: I20260416 23:59:36.518168 1607 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:59:36.518668 update_engine[1607]: I20260416 23:59:36.518651 1607 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 23:59:36.524471 update_engine[1607]: E20260416 23:59:36.524437 1607 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:59:36.524533 update_engine[1607]: I20260416 23:59:36.524509 1607 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 23:59:36.524533 update_engine[1607]: I20260416 23:59:36.524518 1607 omaha_request_action.cc:617] Omaha request response: Apr 16 23:59:36.524627 update_engine[1607]: E20260416 23:59:36.524590 1607 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 23:59:36.524691 update_engine[1607]: I20260416 23:59:36.524657 1607 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 23:59:36.524691 update_engine[1607]: I20260416 23:59:36.524670 1607 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 23:59:36.524691 update_engine[1607]: I20260416 23:59:36.524677 1607 update_attempter.cc:306] Processing Done. Apr 16 23:59:36.524760 update_engine[1607]: E20260416 23:59:36.524709 1607 update_attempter.cc:619] Update failed. Apr 16 23:59:36.524760 update_engine[1607]: I20260416 23:59:36.524718 1607 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 23:59:36.524760 update_engine[1607]: I20260416 23:59:36.524724 1607 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 23:59:36.524760 update_engine[1607]: I20260416 23:59:36.524727 1607 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 16 23:59:36.524840 update_engine[1607]: I20260416 23:59:36.524791 1607 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 23:59:36.524840 update_engine[1607]: I20260416 23:59:36.524823 1607 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 23:59:36.524840 update_engine[1607]: I20260416 23:59:36.524828 1607 omaha_request_action.cc:272] Request: Apr 16 23:59:36.524840 update_engine[1607]: Apr 16 23:59:36.524840 update_engine[1607]: Apr 16 23:59:36.524840 update_engine[1607]: Apr 16 23:59:36.524840 update_engine[1607]: Apr 16 23:59:36.524840 update_engine[1607]: Apr 16 23:59:36.524840 update_engine[1607]: Apr 16 23:59:36.524840 update_engine[1607]: I20260416 23:59:36.524833 1607 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 23:59:36.525006 update_engine[1607]: I20260416 23:59:36.524848 1607 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 23:59:36.525213 update_engine[1607]: I20260416 23:59:36.525103 1607 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 23:59:36.525356 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 23:59:36.530936 update_engine[1607]: E20260416 23:59:36.530897 1607 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 23:59:36.530994 update_engine[1607]: I20260416 23:59:36.530970 1607 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 23:59:36.530994 update_engine[1607]: I20260416 23:59:36.530977 1607 omaha_request_action.cc:617] Omaha request response: Apr 16 23:59:36.530994 update_engine[1607]: I20260416 23:59:36.530983 1607 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 23:59:36.530994 update_engine[1607]: I20260416 23:59:36.530987 1607 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 23:59:36.530994 update_engine[1607]: I20260416 23:59:36.530992 1607 update_attempter.cc:306] Processing Done. Apr 16 23:59:36.531280 update_engine[1607]: I20260416 23:59:36.530997 1607 update_attempter.cc:310] Error event sent. Apr 16 23:59:36.531280 update_engine[1607]: I20260416 23:59:36.531004 1607 update_check_scheduler.cc:74] Next update check in 49m22s Apr 16 23:59:36.531320 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 16 23:59:40.893635 systemd[1]: Started sshd@20-10.0.0.99:22-50.85.169.122:39432.service - OpenSSH per-connection server daemon (50.85.169.122:39432). Apr 16 23:59:41.004681 sshd[6314]: Accepted publickey for core from 50.85.169.122 port 39432 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:41.005916 sshd-session[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:41.010102 systemd-logind[1603]: New session 21 of user core. 
Apr 16 23:59:41.024331 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 23:59:41.154004 sshd[6317]: Connection closed by 50.85.169.122 port 39432 Apr 16 23:59:41.154848 sshd-session[6314]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:41.158417 systemd[1]: sshd@20-10.0.0.99:22-50.85.169.122:39432.service: Deactivated successfully. Apr 16 23:59:41.161558 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 23:59:41.162266 systemd-logind[1603]: Session 21 logged out. Waiting for processes to exit. Apr 16 23:59:41.163222 systemd-logind[1603]: Removed session 21. Apr 16 23:59:46.178659 systemd[1]: Started sshd@21-10.0.0.99:22-50.85.169.122:39440.service - OpenSSH per-connection server daemon (50.85.169.122:39440). Apr 16 23:59:46.281349 sshd[6356]: Accepted publickey for core from 50.85.169.122 port 39440 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:46.282511 sshd-session[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:46.286348 systemd-logind[1603]: New session 22 of user core. Apr 16 23:59:46.299397 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 23:59:46.388557 sshd[6359]: Connection closed by 50.85.169.122 port 39440 Apr 16 23:59:46.388907 sshd-session[6356]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:46.392314 systemd[1]: sshd@21-10.0.0.99:22-50.85.169.122:39440.service: Deactivated successfully. Apr 16 23:59:46.394165 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 23:59:46.395587 systemd-logind[1603]: Session 22 logged out. Waiting for processes to exit. Apr 16 23:59:46.396432 systemd-logind[1603]: Removed session 22. Apr 16 23:59:51.413639 systemd[1]: Started sshd@22-10.0.0.99:22-50.85.169.122:51852.service - OpenSSH per-connection server daemon (50.85.169.122:51852). 
Apr 16 23:59:51.523847 sshd[6372]: Accepted publickey for core from 50.85.169.122 port 51852 ssh2: RSA SHA256:u5daex5fbfF6gkH3TuW6SsdItDTN0gEqgyO+gco2L6k Apr 16 23:59:51.524980 sshd-session[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:51.528859 systemd-logind[1603]: New session 23 of user core. Apr 16 23:59:51.537414 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 23:59:51.629989 sshd[6375]: Connection closed by 50.85.169.122 port 51852 Apr 16 23:59:51.630450 sshd-session[6372]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:51.633934 systemd[1]: sshd@22-10.0.0.99:22-50.85.169.122:51852.service: Deactivated successfully. Apr 16 23:59:51.635694 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 23:59:51.636971 systemd-logind[1603]: Session 23 logged out. Waiting for processes to exit. Apr 16 23:59:51.638072 systemd-logind[1603]: Removed session 23. Apr 17 00:00:16.947925 systemd[1]: cri-containerd-9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7.scope: Deactivated successfully. Apr 17 00:00:16.948587 systemd[1]: cri-containerd-9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7.scope: Consumed 26.127s CPU time, 111.6M memory peak. Apr 17 00:00:16.949433 containerd[1624]: time="2026-04-17T00:00:16.949362255Z" level=info msg="received container exit event container_id:\"9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7\" id:\"9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7\" pid:3173 exit_status:1 exited_at:{seconds:1776384016 nanos:948700775}" Apr 17 00:00:16.967436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7-rootfs.mount: Deactivated successfully. 
Apr 17 00:00:17.424749 kubelet[2822]: E0417 00:00:17.424694 2822 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.99:52882->10.0.0.88:2379: read: connection timed out" Apr 17 00:00:18.360527 systemd[1]: cri-containerd-739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c.scope: Deactivated successfully. Apr 17 00:00:18.360821 systemd[1]: cri-containerd-739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c.scope: Consumed 4.301s CPU time, 67.4M memory peak. Apr 17 00:00:18.363327 containerd[1624]: time="2026-04-17T00:00:18.363187150Z" level=info msg="received container exit event container_id:\"739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c\" id:\"739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c\" pid:2643 exit_status:1 exited_at:{seconds:1776384018 nanos:362851151}" Apr 17 00:00:18.384254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c-rootfs.mount: Deactivated successfully. 
Apr 17 00:00:18.616147 kubelet[2822]: I0417 00:00:18.616023 2822 scope.go:117] "RemoveContainer" containerID="9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7" Apr 17 00:00:18.899361 containerd[1624]: time="2026-04-17T00:00:18.899264663Z" level=info msg="CreateContainer within sandbox \"5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 17 00:00:19.515998 containerd[1624]: time="2026-04-17T00:00:19.515638941Z" level=info msg="Container 9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:00:19.619479 kubelet[2822]: I0417 00:00:19.619452 2822 scope.go:117] "RemoveContainer" containerID="739288ab053da5eb86dfd3b9cd2f95167245f28328e5bd68d5789c7fb41de29c" Apr 17 00:00:19.621353 containerd[1624]: time="2026-04-17T00:00:19.621310469Z" level=info msg="CreateContainer within sandbox \"732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 17 00:00:19.779315 containerd[1624]: time="2026-04-17T00:00:19.779190763Z" level=info msg="CreateContainer within sandbox \"5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d\"" Apr 17 00:00:19.779823 containerd[1624]: time="2026-04-17T00:00:19.779774202Z" level=info msg="StartContainer for \"9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d\"" Apr 17 00:00:19.780800 containerd[1624]: time="2026-04-17T00:00:19.780762561Z" level=info msg="connecting to shim 9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d" address="unix:///run/containerd/s/d3c8637f82b5443925e11261e741844f29d2ed9c791bd6d0e29bd9f8d1d734de" protocol=ttrpc version=3 Apr 17 00:00:19.801274 systemd[1]: Started 
cri-containerd-9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d.scope - libcontainer container 9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d. Apr 17 00:00:20.135511 containerd[1624]: time="2026-04-17T00:00:20.135409213Z" level=info msg="StartContainer for \"9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d\" returns successfully" Apr 17 00:00:20.322095 containerd[1624]: time="2026-04-17T00:00:20.322035706Z" level=info msg="Container 863dcd946e0bfe0c9def5d16ba2cd47226e8e494822da4cf86f15695334fa684: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:00:20.617789 containerd[1624]: time="2026-04-17T00:00:20.617717923Z" level=info msg="CreateContainer within sandbox \"732ba33fa3dd7673917ffb6433dcd856a11ee751633efec09ad4c816a6d7abb6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"863dcd946e0bfe0c9def5d16ba2cd47226e8e494822da4cf86f15695334fa684\"" Apr 17 00:00:20.618402 containerd[1624]: time="2026-04-17T00:00:20.618374322Z" level=info msg="StartContainer for \"863dcd946e0bfe0c9def5d16ba2cd47226e8e494822da4cf86f15695334fa684\"" Apr 17 00:00:20.619753 containerd[1624]: time="2026-04-17T00:00:20.619713320Z" level=info msg="connecting to shim 863dcd946e0bfe0c9def5d16ba2cd47226e8e494822da4cf86f15695334fa684" address="unix:///run/containerd/s/30ae00e849707d340adf47077673b1e1de1221da4010374220d402bbd827271e" protocol=ttrpc version=3 Apr 17 00:00:20.638211 systemd[1]: Started cri-containerd-863dcd946e0bfe0c9def5d16ba2cd47226e8e494822da4cf86f15695334fa684.scope - libcontainer container 863dcd946e0bfe0c9def5d16ba2cd47226e8e494822da4cf86f15695334fa684. 
Apr 17 00:00:20.862007 containerd[1624]: time="2026-04-17T00:00:20.861968573Z" level=info msg="StartContainer for \"863dcd946e0bfe0c9def5d16ba2cd47226e8e494822da4cf86f15695334fa684\" returns successfully" Apr 17 00:00:21.903411 kubelet[2822]: E0417 00:00:21.903210 2822 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.99:52674->10.0.0.88:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-fcb502653b.18a6fbd8c1e4b496 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-fcb502653b,UID:ffd4fc445f5286ea18a3631baf165ee8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-fcb502653b,},FirstTimestamp:2026-04-17 00:00:11.428738198 +0000 UTC m=+229.465672692,LastTimestamp:2026-04-17 00:00:11.428738198 +0000 UTC m=+229.465672692,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-fcb502653b,}" Apr 17 00:00:23.746935 systemd[1]: cri-containerd-d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b.scope: Deactivated successfully. Apr 17 00:00:23.747286 systemd[1]: cri-containerd-d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b.scope: Consumed 2.942s CPU time, 25.4M memory peak. 
Apr 17 00:00:23.749500 containerd[1624]: time="2026-04-17T00:00:23.749463719Z" level=info msg="received container exit event container_id:\"d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b\" id:\"d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b\" pid:2679 exit_status:1 exited_at:{seconds:1776384023 nanos:749216400}" Apr 17 00:00:23.769136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b-rootfs.mount: Deactivated successfully. Apr 17 00:00:24.641588 kubelet[2822]: I0417 00:00:24.641539 2822 scope.go:117] "RemoveContainer" containerID="d8877585f386354d1d7ecd59e43ce6a271a575e12d8d3fa31c9659ce8539283b" Apr 17 00:00:24.643201 containerd[1624]: time="2026-04-17T00:00:24.643165200Z" level=info msg="CreateContainer within sandbox \"426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 17 00:00:25.163709 containerd[1624]: time="2026-04-17T00:00:25.162823736Z" level=info msg="Container 4cdcf39bc8d7c9a7f0dea7b912c3fce6f9dc301332199bb244fea384b5b8edf7: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:00:25.478620 systemd[1]: cri-containerd-9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d.scope: Deactivated successfully. Apr 17 00:00:25.479525 containerd[1624]: time="2026-04-17T00:00:25.479091723Z" level=info msg="received container exit event container_id:\"9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d\" id:\"9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d\" pid:6549 exit_status:1 exited_at:{seconds:1776384025 nanos:478860803}" Apr 17 00:00:25.498412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d-rootfs.mount: Deactivated successfully. 
Apr 17 00:00:25.597654 containerd[1624]: time="2026-04-17T00:00:25.597594954Z" level=info msg="CreateContainer within sandbox \"426635444568733c08bd6b2628b2bcea7247033201f3f01f6b2cce1bb07cce7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4cdcf39bc8d7c9a7f0dea7b912c3fce6f9dc301332199bb244fea384b5b8edf7\"" Apr 17 00:00:25.617320 containerd[1624]: time="2026-04-17T00:00:25.598163193Z" level=info msg="StartContainer for \"4cdcf39bc8d7c9a7f0dea7b912c3fce6f9dc301332199bb244fea384b5b8edf7\"" Apr 17 00:00:25.620675 containerd[1624]: time="2026-04-17T00:00:25.620637321Z" level=info msg="connecting to shim 4cdcf39bc8d7c9a7f0dea7b912c3fce6f9dc301332199bb244fea384b5b8edf7" address="unix:///run/containerd/s/90ddf14663eba449761f223be919fc82893b9637b59c36bac2302a565b3c3240" protocol=ttrpc version=3 Apr 17 00:00:25.642343 systemd[1]: Started cri-containerd-4cdcf39bc8d7c9a7f0dea7b912c3fce6f9dc301332199bb244fea384b5b8edf7.scope - libcontainer container 4cdcf39bc8d7c9a7f0dea7b912c3fce6f9dc301332199bb244fea384b5b8edf7. 
Apr 17 00:00:25.684272 kubelet[2822]: I0417 00:00:25.684160 2822 scope.go:117] "RemoveContainer" containerID="9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7" Apr 17 00:00:25.684609 kubelet[2822]: I0417 00:00:25.684433 2822 scope.go:117] "RemoveContainer" containerID="9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d" Apr 17 00:00:25.684638 kubelet[2822]: E0417 00:00:25.684607 2822 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-mjzz5_tigera-operator(9463c683-bead-4e7e-b2e3-777720a3bf2c)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-mjzz5" podUID="9463c683-bead-4e7e-b2e3-777720a3bf2c" Apr 17 00:00:25.687336 containerd[1624]: time="2026-04-17T00:00:25.687290385Z" level=info msg="RemoveContainer for \"9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7\"" Apr 17 00:00:25.712813 containerd[1624]: time="2026-04-17T00:00:25.712727549Z" level=info msg="StartContainer for \"4cdcf39bc8d7c9a7f0dea7b912c3fce6f9dc301332199bb244fea384b5b8edf7\" returns successfully" Apr 17 00:00:25.752110 containerd[1624]: time="2026-04-17T00:00:25.752079132Z" level=info msg="RemoveContainer for \"9a4b2c43c21806eaa8e12140f9762819f4e0bbd2a1c4126527d78129150098c7\" returns successfully" Apr 17 00:00:27.426247 kubelet[2822]: E0417 00:00:27.426174 2822 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4459-2-4-n-fcb502653b)" Apr 17 00:00:27.919091 kubelet[2822]: I0417 00:00:27.919016 2822 status_manager.go:895] "Failed to get status for pod" podUID="ffd4fc445f5286ea18a3631baf165ee8" pod="kube-system/kube-apiserver-ci-4459-2-4-n-fcb502653b" err="rpc error: code = Unavailable desc = error reading from server: read tcp 
10.0.0.99:52800->10.0.0.88:2379: read: connection timed out" Apr 17 00:00:33.963118 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec Apr 17 00:00:37.052074 kubelet[2822]: I0417 00:00:37.051908 2822 scope.go:117] "RemoveContainer" containerID="9544946217f62281d12cbce25fffaefdfbc1093f08173fefa2757df91ac8018d" Apr 17 00:00:37.070177 containerd[1624]: time="2026-04-17T00:00:37.053819993Z" level=info msg="CreateContainer within sandbox \"5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Apr 17 00:00:37.128061 containerd[1624]: time="2026-04-17T00:00:37.127937847Z" level=info msg="Container 8c673829dc15a414e21a4fd31786e8f35033ae13ba772ce723a1a144b144fa86: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:00:37.146213 containerd[1624]: time="2026-04-17T00:00:37.146169821Z" level=info msg="CreateContainer within sandbox \"5533caad9fde56aa0dbdda2b39fc82bde0f1f535c31eadfe8f141aad21b22d40\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"8c673829dc15a414e21a4fd31786e8f35033ae13ba772ce723a1a144b144fa86\"" Apr 17 00:00:37.146647 containerd[1624]: time="2026-04-17T00:00:37.146625060Z" level=info msg="StartContainer for \"8c673829dc15a414e21a4fd31786e8f35033ae13ba772ce723a1a144b144fa86\"" Apr 17 00:00:37.147506 containerd[1624]: time="2026-04-17T00:00:37.147481339Z" level=info msg="connecting to shim 8c673829dc15a414e21a4fd31786e8f35033ae13ba772ce723a1a144b144fa86" address="unix:///run/containerd/s/d3c8637f82b5443925e11261e741844f29d2ed9c791bd6d0e29bd9f8d1d734de" protocol=ttrpc version=3 Apr 17 00:00:37.168377 systemd[1]: Started cri-containerd-8c673829dc15a414e21a4fd31786e8f35033ae13ba772ce723a1a144b144fa86.scope - libcontainer container 8c673829dc15a414e21a4fd31786e8f35033ae13ba772ce723a1a144b144fa86. 
Apr 17 00:00:37.202600 containerd[1624]: time="2026-04-17T00:00:37.202565780Z" level=info msg="StartContainer for \"8c673829dc15a414e21a4fd31786e8f35033ae13ba772ce723a1a144b144fa86\" returns successfully" Apr 17 00:00:37.427419 kubelet[2822]: E0417 00:00:37.427025 2822 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-fcb502653b?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"