Mar 13 00:01:20.806994 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 13 00:01:20.807019 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Mar 12 22:07:21 -00 2026
Mar 13 00:01:20.807029 kernel: KASLR enabled
Mar 13 00:01:20.807035 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 13 00:01:20.807040 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Mar 13 00:01:20.807046 kernel: random: crng init done
Mar 13 00:01:20.807053 kernel: secureboot: Secure boot disabled
Mar 13 00:01:20.807058 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:01:20.807064 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 13 00:01:20.807070 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 13 00:01:20.807077 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807083 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807089 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807095 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807102 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807109 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807115 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807121 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807127 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:01:20.807133 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 13 00:01:20.807139 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 13 00:01:20.807145 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 13 00:01:20.807152 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 13 00:01:20.807198 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Mar 13 00:01:20.807205 kernel: Zone ranges:
Mar 13 00:01:20.807211 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 13 00:01:20.807220 kernel: DMA32 empty
Mar 13 00:01:20.807226 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 13 00:01:20.807232 kernel: Device empty
Mar 13 00:01:20.807238 kernel: Movable zone start for each node
Mar 13 00:01:20.807244 kernel: Early memory node ranges
Mar 13 00:01:20.807250 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Mar 13 00:01:20.807256 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Mar 13 00:01:20.807262 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Mar 13 00:01:20.807268 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 13 00:01:20.807274 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 13 00:01:20.807280 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 13 00:01:20.807286 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 13 00:01:20.807293 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 13 00:01:20.807299 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 13 00:01:20.808397 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 13 00:01:20.808406 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 13 00:01:20.808413 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Mar 13 00:01:20.808422 kernel: psci: probing for conduit method from ACPI.
Mar 13 00:01:20.808428 kernel: psci: PSCIv1.1 detected in firmware.
Mar 13 00:01:20.808434 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 13 00:01:20.808441 kernel: psci: Trusted OS migration not required
Mar 13 00:01:20.808447 kernel: psci: SMC Calling Convention v1.1
Mar 13 00:01:20.808454 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 13 00:01:20.808461 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 13 00:01:20.808467 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 13 00:01:20.808474 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 13 00:01:20.808480 kernel: Detected PIPT I-cache on CPU0
Mar 13 00:01:20.808487 kernel: CPU features: detected: GIC system register CPU interface
Mar 13 00:01:20.808494 kernel: CPU features: detected: Spectre-v4
Mar 13 00:01:20.808501 kernel: CPU features: detected: Spectre-BHB
Mar 13 00:01:20.808508 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 13 00:01:20.808514 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 13 00:01:20.808520 kernel: CPU features: detected: ARM erratum 1418040
Mar 13 00:01:20.808527 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 13 00:01:20.808533 kernel: alternatives: applying boot alternatives
Mar 13 00:01:20.808542 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d
Mar 13 00:01:20.808548 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:01:20.808555 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:01:20.808561 kernel: Fallback order for Node 0: 0
Mar 13 00:01:20.808569 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Mar 13 00:01:20.808575 kernel: Policy zone: Normal
Mar 13 00:01:20.808582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:01:20.808588 kernel: software IO TLB: area num 2.
Mar 13 00:01:20.808595 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Mar 13 00:01:20.808601 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:01:20.808608 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:01:20.808615 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:01:20.808621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:01:20.808628 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:01:20.808634 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:01:20.808641 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:01:20.808649 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:01:20.808656 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:01:20.808662 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:01:20.808669 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 13 00:01:20.808675 kernel: GICv3: 256 SPIs implemented
Mar 13 00:01:20.808681 kernel: GICv3: 0 Extended SPIs implemented
Mar 13 00:01:20.808688 kernel: Root IRQ handler: gic_handle_irq
Mar 13 00:01:20.808694 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 13 00:01:20.808701 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 13 00:01:20.808707 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 13 00:01:20.808714 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 13 00:01:20.808722 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Mar 13 00:01:20.808728 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Mar 13 00:01:20.808735 kernel: GICv3: using LPI property table @0x0000000100120000
Mar 13 00:01:20.808741 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Mar 13 00:01:20.808748 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:01:20.808754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 13 00:01:20.808761 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 13 00:01:20.808767 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 13 00:01:20.808774 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 13 00:01:20.808781 kernel: Console: colour dummy device 80x25
Mar 13 00:01:20.808787 kernel: ACPI: Core revision 20240827
Mar 13 00:01:20.808795 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 13 00:01:20.808802 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:01:20.808809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:01:20.808815 kernel: landlock: Up and running.
Mar 13 00:01:20.808822 kernel: SELinux: Initializing.
Mar 13 00:01:20.808829 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:01:20.808835 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:01:20.808842 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:01:20.808849 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:01:20.808857 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:01:20.808864 kernel: Remapping and enabling EFI services.
Mar 13 00:01:20.808871 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:01:20.808877 kernel: Detected PIPT I-cache on CPU1
Mar 13 00:01:20.808884 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 13 00:01:20.808891 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Mar 13 00:01:20.808897 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 13 00:01:20.808904 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 13 00:01:20.808911 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:01:20.808918 kernel: SMP: Total of 2 processors activated.
Mar 13 00:01:20.808931 kernel: CPU: All CPU(s) started at EL1
Mar 13 00:01:20.808938 kernel: CPU features: detected: 32-bit EL0 Support
Mar 13 00:01:20.808947 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 13 00:01:20.808954 kernel: CPU features: detected: Common not Private translations
Mar 13 00:01:20.808961 kernel: CPU features: detected: CRC32 instructions
Mar 13 00:01:20.808968 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 13 00:01:20.808975 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 13 00:01:20.808984 kernel: CPU features: detected: LSE atomic instructions
Mar 13 00:01:20.808991 kernel: CPU features: detected: Privileged Access Never
Mar 13 00:01:20.808998 kernel: CPU features: detected: RAS Extension Support
Mar 13 00:01:20.809006 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 13 00:01:20.809013 kernel: alternatives: applying system-wide alternatives
Mar 13 00:01:20.809020 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 13 00:01:20.809027 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Mar 13 00:01:20.809035 kernel: devtmpfs: initialized
Mar 13 00:01:20.809042 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:01:20.809050 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:01:20.809057 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 13 00:01:20.809064 kernel: 0 pages in range for non-PLT usage
Mar 13 00:01:20.809071 kernel: 508400 pages in range for PLT usage
Mar 13 00:01:20.809079 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:01:20.809086 kernel: SMBIOS 3.0.0 present.
Mar 13 00:01:20.809093 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Mar 13 00:01:20.809100 kernel: DMI: Memory slots populated: 1/1 Mar 13 00:01:20.809107 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 13 00:01:20.809116 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 13 00:01:20.809123 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 13 00:01:20.809130 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 13 00:01:20.809137 kernel: audit: initializing netlink subsys (disabled) Mar 13 00:01:20.809145 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1 Mar 13 00:01:20.809152 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 13 00:01:20.809170 kernel: cpuidle: using governor menu Mar 13 00:01:20.809177 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 13 00:01:20.809184 kernel: ASID allocator initialised with 32768 entries Mar 13 00:01:20.809194 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 13 00:01:20.809201 kernel: Serial: AMBA PL011 UART driver Mar 13 00:01:20.809208 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 13 00:01:20.809215 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 13 00:01:20.809222 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 13 00:01:20.809229 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 13 00:01:20.809237 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 13 00:01:20.809244 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 13 00:01:20.809251 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 13 00:01:20.809259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 13 00:01:20.809266 kernel: ACPI: Added _OSI(Module Device) Mar 13 
00:01:20.809273 kernel: ACPI: Added _OSI(Processor Device) Mar 13 00:01:20.809280 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 13 00:01:20.809287 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 13 00:01:20.809294 kernel: ACPI: Interpreter enabled Mar 13 00:01:20.809333 kernel: ACPI: Using GIC for interrupt routing Mar 13 00:01:20.810339 kernel: ACPI: MCFG table detected, 1 entries Mar 13 00:01:20.810353 kernel: ACPI: CPU0 has been hot-added Mar 13 00:01:20.810360 kernel: ACPI: CPU1 has been hot-added Mar 13 00:01:20.810371 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 13 00:01:20.810379 kernel: printk: legacy console [ttyAMA0] enabled Mar 13 00:01:20.810386 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 13 00:01:20.810543 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 13 00:01:20.810610 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 13 00:01:20.810669 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 13 00:01:20.810726 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 13 00:01:20.810786 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 13 00:01:20.810795 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 13 00:01:20.810803 kernel: PCI host bridge to bus 0000:00 Mar 13 00:01:20.810874 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 13 00:01:20.810928 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 13 00:01:20.810981 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 13 00:01:20.811032 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 13 00:01:20.811112 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Mar 13 
00:01:20.811204 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Mar 13 00:01:20.811270 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Mar 13 00:01:20.814453 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Mar 13 00:01:20.814579 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.814654 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Mar 13 00:01:20.814722 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Mar 13 00:01:20.814782 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Mar 13 00:01:20.814845 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Mar 13 00:01:20.814926 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.814994 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Mar 13 00:01:20.815051 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Mar 13 00:01:20.815112 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Mar 13 00:01:20.815260 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.815341 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Mar 13 00:01:20.815419 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Mar 13 00:01:20.815494 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Mar 13 00:01:20.815556 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Mar 13 00:01:20.815625 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.815686 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Mar 13 00:01:20.815765 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Mar 13 00:01:20.815835 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Mar 13 00:01:20.815905 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Mar 13 
00:01:20.815977 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.816041 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Mar 13 00:01:20.816109 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Mar 13 00:01:20.816186 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Mar 13 00:01:20.816259 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Mar 13 00:01:20.817569 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.817658 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Mar 13 00:01:20.817719 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Mar 13 00:01:20.817777 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Mar 13 00:01:20.817835 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Mar 13 00:01:20.817902 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.817969 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Mar 13 00:01:20.818029 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Mar 13 00:01:20.818103 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Mar 13 00:01:20.818215 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Mar 13 00:01:20.818294 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.818373 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Mar 13 00:01:20.818437 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Mar 13 00:01:20.818500 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Mar 13 00:01:20.818570 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:01:20.818630 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Mar 13 00:01:20.818691 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Mar 13 00:01:20.818749 kernel: pci 0000:00:03.0: 
bridge window [mem 0x10000000-0x101fffff] Mar 13 00:01:20.818821 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint Mar 13 00:01:20.818882 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Mar 13 00:01:20.818953 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Mar 13 00:01:20.819016 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Mar 13 00:01:20.819086 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Mar 13 00:01:20.819145 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Mar 13 00:01:20.819239 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Mar 13 00:01:20.820009 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Mar 13 00:01:20.820115 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Mar 13 00:01:20.820238 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Mar 13 00:01:20.820947 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Mar 13 00:01:20.821062 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Mar 13 00:01:20.821127 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Mar 13 00:01:20.821223 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Mar 13 00:01:20.821298 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Mar 13 00:01:20.821401 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Mar 13 00:01:20.821472 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Mar 13 00:01:20.821534 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Mar 13 00:01:20.821595 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Mar 13 00:01:20.821664 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Mar 13 00:01:20.821732 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Mar 13 
00:01:20.821815 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Mar 13 00:01:20.821881 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Mar 13 00:01:20.821946 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Mar 13 00:01:20.822006 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Mar 13 00:01:20.822065 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Mar 13 00:01:20.822128 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Mar 13 00:01:20.822208 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Mar 13 00:01:20.822288 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Mar 13 00:01:20.822612 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 13 00:01:20.822699 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Mar 13 00:01:20.822759 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Mar 13 00:01:20.822841 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 13 00:01:20.822914 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Mar 13 00:01:20.822983 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Mar 13 00:01:20.823063 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 13 00:01:20.823136 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Mar 
13 00:01:20.823267 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Mar 13 00:01:20.823399 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 13 00:01:20.823477 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Mar 13 00:01:20.823550 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Mar 13 00:01:20.823630 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 13 00:01:20.823702 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Mar 13 00:01:20.823773 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Mar 13 00:01:20.823837 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 13 00:01:20.823899 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Mar 13 00:01:20.823960 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Mar 13 00:01:20.824027 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 13 00:01:20.824092 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Mar 13 00:01:20.824153 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Mar 13 00:01:20.824236 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Mar 13 00:01:20.824297 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Mar 13 00:01:20.824387 kernel: pci 0000:00:02.1: bridge window [mem 
0x10200000-0x103fffff]: assigned Mar 13 00:01:20.824451 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Mar 13 00:01:20.824514 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Mar 13 00:01:20.824579 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Mar 13 00:01:20.824647 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Mar 13 00:01:20.824708 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Mar 13 00:01:20.824770 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Mar 13 00:01:20.824831 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Mar 13 00:01:20.824891 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Mar 13 00:01:20.824952 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Mar 13 00:01:20.825014 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Mar 13 00:01:20.825078 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Mar 13 00:01:20.825141 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Mar 13 00:01:20.825220 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Mar 13 00:01:20.825286 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Mar 13 00:01:20.825364 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Mar 13 00:01:20.825432 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Mar 13 00:01:20.825494 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Mar 13 00:01:20.825556 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Mar 13 00:01:20.825620 kernel: pci 
0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Mar 13 00:01:20.825682 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Mar 13 00:01:20.825743 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Mar 13 00:01:20.825806 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Mar 13 00:01:20.825867 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Mar 13 00:01:20.825932 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Mar 13 00:01:20.825992 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Mar 13 00:01:20.826054 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Mar 13 00:01:20.826115 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Mar 13 00:01:20.826209 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Mar 13 00:01:20.826275 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Mar 13 00:01:20.826351 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Mar 13 00:01:20.826415 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Mar 13 00:01:20.826481 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Mar 13 00:01:20.826542 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Mar 13 00:01:20.826603 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Mar 13 00:01:20.826665 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Mar 13 00:01:20.826733 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Mar 13 00:01:20.826802 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Mar 13 00:01:20.826863 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Mar 13 00:01:20.826925 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Mar 13 00:01:20.826983 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Mar 13 
00:01:20.827041 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Mar 13 00:01:20.827099 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Mar 13 00:01:20.827167 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Mar 13 00:01:20.827242 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Mar 13 00:01:20.827331 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Mar 13 00:01:20.827406 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Mar 13 00:01:20.827470 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Mar 13 00:01:20.827529 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Mar 13 00:01:20.827594 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Mar 13 00:01:20.827654 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Mar 13 00:01:20.827715 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Mar 13 00:01:20.827774 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Mar 13 00:01:20.827835 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Mar 13 00:01:20.827893 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Mar 13 00:01:20.827958 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Mar 13 00:01:20.828016 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Mar 13 00:01:20.828074 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Mar 13 00:01:20.828133 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Mar 13 00:01:20.828228 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Mar 13 00:01:20.828379 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Mar 13 00:01:20.829814 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Mar 13 00:01:20.829893 kernel: pci 0000:00:02.4: PCI bridge to 
[bus 05] Mar 13 00:01:20.829953 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Mar 13 00:01:20.830029 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Mar 13 00:01:20.830105 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Mar 13 00:01:20.830220 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Mar 13 00:01:20.830295 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Mar 13 00:01:20.831555 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Mar 13 00:01:20.831644 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Mar 13 00:01:20.831709 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Mar 13 00:01:20.831769 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 13 00:01:20.831840 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Mar 13 00:01:20.831903 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Mar 13 00:01:20.831968 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Mar 13 00:01:20.832038 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Mar 13 00:01:20.832104 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Mar 13 00:01:20.832215 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Mar 13 00:01:20.832290 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 13 00:01:20.834427 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Mar 13 00:01:20.834498 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Mar 13 00:01:20.834557 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Mar 13 00:01:20.834617 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 13 00:01:20.834680 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Mar 13 00:01:20.834739 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Mar 13 
00:01:20.834799 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Mar 13 00:01:20.834863 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Mar 13 00:01:20.834929 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 13 00:01:20.834984 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 13 00:01:20.835037 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 13 00:01:20.835103 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Mar 13 00:01:20.835196 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Mar 13 00:01:20.835265 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Mar 13 00:01:20.836388 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Mar 13 00:01:20.836483 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Mar 13 00:01:20.836570 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Mar 13 00:01:20.836655 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Mar 13 00:01:20.836727 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Mar 13 00:01:20.836808 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Mar 13 00:01:20.836887 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Mar 13 00:01:20.836957 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Mar 13 00:01:20.837026 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Mar 13 00:01:20.837106 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Mar 13 00:01:20.837194 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Mar 13 00:01:20.837269 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Mar 13 00:01:20.837377 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Mar 13 00:01:20.837451 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Mar 13 00:01:20.837520 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 13 00:01:20.837597 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Mar 13 00:01:20.837667 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Mar 13 00:01:20.837750 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 13 00:01:20.837854 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Mar 13 00:01:20.837930 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Mar 13 00:01:20.837999 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 13 00:01:20.838077 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Mar 13 00:01:20.838147 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Mar 13 00:01:20.838243 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Mar 13 00:01:20.838256 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 13 00:01:20.838266 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 13 00:01:20.838279 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 13 00:01:20.838289 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 13 00:01:20.838298 kernel: iommu: Default domain type: Translated Mar 13 00:01:20.838337 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 13 00:01:20.838347 kernel: efivars: Registered efivars operations Mar 13 00:01:20.838356 kernel: vgaarb: loaded Mar 13 00:01:20.838366 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 13 00:01:20.838375 kernel: VFS: Disk quotas dquot_6.6.0 Mar 13 00:01:20.838385 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 13 00:01:20.838397 kernel: pnp: PnP ACPI init Mar 13 00:01:20.838500 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 13 00:01:20.838515 kernel: pnp: PnP ACPI: found 1 devices Mar 13 00:01:20.838524 kernel: NET: Registered PF_INET 
protocol family Mar 13 00:01:20.838534 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 13 00:01:20.838544 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 13 00:01:20.838553 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 13 00:01:20.838563 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 13 00:01:20.838574 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 13 00:01:20.838584 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 13 00:01:20.838593 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:01:20.838603 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:01:20.838613 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 00:01:20.838701 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 13 00:01:20.838714 kernel: PCI: CLS 0 bytes, default 64 Mar 13 00:01:20.838724 kernel: kvm [1]: HYP mode not available Mar 13 00:01:20.838733 kernel: Initialise system trusted keyrings Mar 13 00:01:20.838745 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 13 00:01:20.838755 kernel: Key type asymmetric registered Mar 13 00:01:20.838764 kernel: Asymmetric key parser 'x509' registered Mar 13 00:01:20.838773 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 13 00:01:20.838783 kernel: io scheduler mq-deadline registered Mar 13 00:01:20.838792 kernel: io scheduler kyber registered Mar 13 00:01:20.838801 kernel: io scheduler bfq registered Mar 13 00:01:20.838812 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 13 00:01:20.838893 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 13 00:01:20.838974 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 13 00:01:20.839049 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.839128 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 13 00:01:20.839258 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Mar 13 00:01:20.840435 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.840553 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 13 00:01:20.840620 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 13 00:01:20.840680 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.840752 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 13 00:01:20.840830 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 13 00:01:20.840894 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.840960 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 13 00:01:20.841020 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 13 00:01:20.841079 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.841144 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 13 00:01:20.841226 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 13 00:01:20.841294 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.842448 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 13 00:01:20.842520 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 13 00:01:20.842580 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.842643 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 13 00:01:20.842702 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 13 00:01:20.842761 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.842777 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 13 00:01:20.842838 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 13 00:01:20.842896 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 13 00:01:20.842954 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 00:01:20.842964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 13 00:01:20.842972 kernel: ACPI: button: Power Button [PWRB] Mar 13 00:01:20.842979 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 13 00:01:20.843046 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 13 00:01:20.843112 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 13 00:01:20.843124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 00:01:20.843132 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 13 00:01:20.843215 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 13 00:01:20.843227 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 13 00:01:20.843234 kernel: thunder_xcv, ver 1.0 Mar 13 00:01:20.843242 kernel: thunder_bgx, ver 1.0 Mar 13 00:01:20.843249 kernel: nicpf, ver 1.0 Mar 13 00:01:20.843256 kernel: nicvf, ver 1.0 Mar 13 00:01:20.843370 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 13 00:01:20.843438 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-13T00:01:20 UTC (1773360080) Mar 13 00:01:20.843450 kernel: hid: raw HID events 
driver (C) Jiri Kosina Mar 13 00:01:20.843458 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Mar 13 00:01:20.843465 kernel: watchdog: NMI not fully supported Mar 13 00:01:20.843472 kernel: watchdog: Hard watchdog permanently disabled Mar 13 00:01:20.843481 kernel: NET: Registered PF_INET6 protocol family Mar 13 00:01:20.843488 kernel: Segment Routing with IPv6 Mar 13 00:01:20.843496 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 00:01:20.843505 kernel: NET: Registered PF_PACKET protocol family Mar 13 00:01:20.843512 kernel: Key type dns_resolver registered Mar 13 00:01:20.843520 kernel: registered taskstats version 1 Mar 13 00:01:20.843527 kernel: Loading compiled-in X.509 certificates Mar 13 00:01:20.843535 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 653709f5ad64856a37b70c07139630123477ee1c' Mar 13 00:01:20.843543 kernel: Demotion targets for Node 0: null Mar 13 00:01:20.843551 kernel: Key type .fscrypt registered Mar 13 00:01:20.843558 kernel: Key type fscrypt-provisioning registered Mar 13 00:01:20.843565 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 13 00:01:20.843574 kernel: ima: Allocated hash algorithm: sha1 Mar 13 00:01:20.843582 kernel: ima: No architecture policies found Mar 13 00:01:20.843590 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 13 00:01:20.843597 kernel: clk: Disabling unused clocks Mar 13 00:01:20.843604 kernel: PM: genpd: Disabling unused power domains Mar 13 00:01:20.843612 kernel: Warning: unable to open an initial console. Mar 13 00:01:20.843619 kernel: Freeing unused kernel memory: 39552K Mar 13 00:01:20.843627 kernel: Run /init as init process Mar 13 00:01:20.843634 kernel: with arguments: Mar 13 00:01:20.843643 kernel: /init Mar 13 00:01:20.843650 kernel: with environment: Mar 13 00:01:20.843658 kernel: HOME=/ Mar 13 00:01:20.843665 kernel: TERM=linux Mar 13 00:01:20.843674 systemd[1]: Successfully made /usr/ read-only. 
Mar 13 00:01:20.843685 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:01:20.843693 systemd[1]: Detected virtualization kvm. Mar 13 00:01:20.843702 systemd[1]: Detected architecture arm64. Mar 13 00:01:20.843710 systemd[1]: Running in initrd. Mar 13 00:01:20.843718 systemd[1]: No hostname configured, using default hostname. Mar 13 00:01:20.843726 systemd[1]: Hostname set to . Mar 13 00:01:20.843733 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:01:20.843741 systemd[1]: Queued start job for default target initrd.target. Mar 13 00:01:20.843749 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:01:20.843757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:01:20.843765 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 00:01:20.843775 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:01:20.843783 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 00:01:20.843791 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 00:01:20.843800 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 00:01:20.843808 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 00:01:20.843816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 13 00:01:20.843826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:01:20.843834 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:01:20.843842 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:01:20.843850 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:01:20.843858 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:01:20.843866 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:01:20.843874 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:01:20.843882 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 00:01:20.843889 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 13 00:01:20.843899 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:01:20.843907 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:01:20.843916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:01:20.843924 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:01:20.843932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 13 00:01:20.843940 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:01:20.843948 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 00:01:20.843956 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 13 00:01:20.843965 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 00:01:20.843973 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:01:20.843981 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 13 00:01:20.843989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:01:20.843997 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 13 00:01:20.844006 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:01:20.844016 systemd[1]: Finished systemd-fsck-usr.service. Mar 13 00:01:20.844024 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 00:01:20.844056 systemd-journald[244]: Collecting audit messages is disabled. Mar 13 00:01:20.844079 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 00:01:20.844088 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:01:20.844096 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 13 00:01:20.844104 kernel: Bridge firewalling registered Mar 13 00:01:20.844112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:01:20.844120 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:01:20.844128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:01:20.844138 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 00:01:20.844146 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:01:20.844167 systemd-journald[244]: Journal started Mar 13 00:01:20.844187 systemd-journald[244]: Runtime Journal (/run/log/journal/11faabcf5f3d432da30f719fd1e68b9e) is 8M, max 76.5M, 68.5M free. Mar 13 00:01:20.795985 systemd-modules-load[246]: Inserted module 'overlay' Mar 13 00:01:20.846381 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 13 00:01:20.822379 systemd-modules-load[246]: Inserted module 'br_netfilter' Mar 13 00:01:20.861516 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:01:20.868549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:01:20.884220 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:01:20.884608 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 13 00:01:20.887677 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 13 00:01:20.890143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:01:20.896151 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:01:20.925111 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9bf054737b516803a47d5bd373cc1c618bc257c93cef3d2e2bc09897e693383d Mar 13 00:01:20.943812 systemd-resolved[286]: Positive Trust Anchors: Mar 13 00:01:20.943829 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:01:20.943860 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:01:20.949529 systemd-resolved[286]: Defaulting to hostname 'linux'. Mar 13 00:01:20.950502 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:01:20.951984 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:01:21.021323 kernel: SCSI subsystem initialized Mar 13 00:01:21.025330 kernel: Loading iSCSI transport class v2.0-870. Mar 13 00:01:21.033334 kernel: iscsi: registered transport (tcp) Mar 13 00:01:21.046336 kernel: iscsi: registered transport (qla4xxx) Mar 13 00:01:21.046406 kernel: QLogic iSCSI HBA Driver Mar 13 00:01:21.066669 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:01:21.092920 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:01:21.094106 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:01:21.141351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 13 00:01:21.146480 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 13 00:01:21.214367 kernel: raid6: neonx8 gen() 15640 MB/s Mar 13 00:01:21.230354 kernel: raid6: neonx4 gen() 15719 MB/s Mar 13 00:01:21.247410 kernel: raid6: neonx2 gen() 13056 MB/s Mar 13 00:01:21.264356 kernel: raid6: neonx1 gen() 10388 MB/s Mar 13 00:01:21.281376 kernel: raid6: int64x8 gen() 6849 MB/s Mar 13 00:01:21.298482 kernel: raid6: int64x4 gen() 7315 MB/s Mar 13 00:01:21.315366 kernel: raid6: int64x2 gen() 6064 MB/s Mar 13 00:01:21.332356 kernel: raid6: int64x1 gen() 4993 MB/s Mar 13 00:01:21.332430 kernel: raid6: using algorithm neonx4 gen() 15719 MB/s Mar 13 00:01:21.349379 kernel: raid6: .... xor() 12262 MB/s, rmw enabled Mar 13 00:01:21.349461 kernel: raid6: using neon recovery algorithm Mar 13 00:01:21.354348 kernel: xor: measuring software checksum speed Mar 13 00:01:21.354423 kernel: 8regs : 21618 MB/sec Mar 13 00:01:21.354459 kernel: 32regs : 19812 MB/sec Mar 13 00:01:21.355402 kernel: arm64_neon : 28157 MB/sec Mar 13 00:01:21.355435 kernel: xor: using function: arm64_neon (28157 MB/sec) Mar 13 00:01:21.410436 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 00:01:21.421016 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:01:21.424467 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:01:21.455112 systemd-udevd[495]: Using default interface naming scheme 'v255'. Mar 13 00:01:21.459588 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:01:21.465595 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 13 00:01:21.504277 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Mar 13 00:01:21.536862 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:01:21.539990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:01:21.605455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 13 00:01:21.609698 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 00:01:21.704340 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Mar 13 00:01:21.709338 kernel: scsi host0: Virtio SCSI HBA Mar 13 00:01:21.714700 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 13 00:01:21.714777 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 13 00:01:21.718075 kernel: ACPI: bus type USB registered Mar 13 00:01:21.718129 kernel: usbcore: registered new interface driver usbfs Mar 13 00:01:21.718192 kernel: usbcore: registered new interface driver hub Mar 13 00:01:21.725461 kernel: usbcore: registered new device driver usb Mar 13 00:01:21.737536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:01:21.738380 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:01:21.740845 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:01:21.745784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:01:21.767820 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 13 00:01:21.768047 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 13 00:01:21.768168 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 13 00:01:21.768256 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 13 00:01:21.768351 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 13 00:01:21.771239 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 13 00:01:21.775353 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 13 00:01:21.775531 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 13 00:01:21.775542 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 13 00:01:21.784682 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Mar 13 00:01:21.784747 kernel: GPT:17805311 != 80003071 Mar 13 00:01:21.784757 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 13 00:01:21.784767 kernel: GPT:17805311 != 80003071 Mar 13 00:01:21.784793 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 13 00:01:21.784811 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 00:01:21.785681 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 13 00:01:21.791038 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 13 00:01:21.791250 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 13 00:01:21.791391 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 13 00:01:21.791606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:01:21.794395 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 13 00:01:21.794545 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 13 00:01:21.795379 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 13 00:01:21.798343 kernel: hub 1-0:1.0: USB hub found Mar 13 00:01:21.798552 kernel: hub 1-0:1.0: 4 ports detected Mar 13 00:01:21.803344 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 13 00:01:21.803541 kernel: hub 2-0:1.0: USB hub found Mar 13 00:01:21.803647 kernel: hub 2-0:1.0: 4 ports detected Mar 13 00:01:21.860737 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 13 00:01:21.871493 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 13 00:01:21.880514 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 13 00:01:21.895345 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 13 00:01:21.896061 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Mar 13 00:01:21.899082 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 00:01:21.906856 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 00:01:21.909598 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:01:21.911684 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:01:21.912995 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:01:21.916479 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 00:01:21.923875 disk-uuid[600]: Primary Header is updated. Mar 13 00:01:21.923875 disk-uuid[600]: Secondary Entries is updated. Mar 13 00:01:21.923875 disk-uuid[600]: Secondary Header is updated. Mar 13 00:01:21.940360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 00:01:21.945940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:01:22.034665 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 13 00:01:22.168655 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 13 00:01:22.168728 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 13 00:01:22.168976 kernel: usbcore: registered new interface driver usbhid Mar 13 00:01:22.169370 kernel: usbhid: USB HID core driver Mar 13 00:01:22.274408 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 13 00:01:22.400367 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 13 00:01:22.453355 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 13 00:01:22.963343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 
00:01:22.963797 disk-uuid[601]: The operation has completed successfully. Mar 13 00:01:23.024407 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 00:01:23.024515 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 00:01:23.053003 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 00:01:23.065830 sh[625]: Success Mar 13 00:01:23.082249 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 13 00:01:23.082299 kernel: device-mapper: uevent: version 1.0.3 Mar 13 00:01:23.082340 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 13 00:01:23.092352 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Mar 13 00:01:23.141955 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 00:01:23.143592 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 00:01:23.159612 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 13 00:01:23.169616 kernel: BTRFS: device fsid fcbb17b2-5053-44fc-82f0-b24e4919d6d8 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (637) Mar 13 00:01:23.171325 kernel: BTRFS info (device dm-0): first mount of filesystem fcbb17b2-5053-44fc-82f0-b24e4919d6d8 Mar 13 00:01:23.171383 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 13 00:01:23.178392 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 13 00:01:23.178465 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 13 00:01:23.178480 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 13 00:01:23.180696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 00:01:23.181786 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Mar 13 00:01:23.183037 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 00:01:23.185478 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 00:01:23.186653 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 13 00:01:23.221356 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (670) Mar 13 00:01:23.223327 kernel: BTRFS info (device sda6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 13 00:01:23.223389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 00:01:23.228502 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 13 00:01:23.228571 kernel: BTRFS info (device sda6): turning on async discard Mar 13 00:01:23.228582 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 00:01:23.236326 kernel: BTRFS info (device sda6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 13 00:01:23.237975 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 00:01:23.241711 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 13 00:01:23.332542 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:01:23.345161 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:01:23.386844 systemd-networkd[811]: lo: Link UP Mar 13 00:01:23.386857 systemd-networkd[811]: lo: Gained carrier Mar 13 00:01:23.388464 systemd-networkd[811]: Enumeration completed Mar 13 00:01:23.389147 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:01:23.389174 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 13 00:01:23.389178 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:01:23.390545 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:01:23.396167 ignition[721]: Ignition 2.22.0 Mar 13 00:01:23.390548 systemd-networkd[811]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:01:23.396175 ignition[721]: Stage: fetch-offline Mar 13 00:01:23.391524 systemd-networkd[811]: eth0: Link UP Mar 13 00:01:23.396208 ignition[721]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:01:23.391686 systemd-networkd[811]: eth1: Link UP Mar 13 00:01:23.396216 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:01:23.391833 systemd-networkd[811]: eth0: Gained carrier Mar 13 00:01:23.396435 ignition[721]: parsed url from cmdline: "" Mar 13 00:01:23.391845 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:01:23.396438 ignition[721]: no config URL provided Mar 13 00:01:23.392599 systemd[1]: Reached target network.target - Network. Mar 13 00:01:23.396444 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:01:23.398944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:01:23.396451 ignition[721]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:01:23.399487 systemd-networkd[811]: eth1: Gained carrier Mar 13 00:01:23.396457 ignition[721]: failed to fetch config: resource requires networking Mar 13 00:01:23.399504 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 13 00:01:23.396637 ignition[721]: Ignition finished successfully Mar 13 00:01:23.402742 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 13 00:01:23.430692 ignition[815]: Ignition 2.22.0 Mar 13 00:01:23.430721 ignition[815]: Stage: fetch Mar 13 00:01:23.430890 ignition[815]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:01:23.430900 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:01:23.430992 ignition[815]: parsed url from cmdline: "" Mar 13 00:01:23.430995 ignition[815]: no config URL provided Mar 13 00:01:23.431000 ignition[815]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:01:23.431007 ignition[815]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:01:23.431046 ignition[815]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 13 00:01:23.431509 ignition[815]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 13 00:01:23.437474 systemd-networkd[811]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 13 00:01:23.442392 systemd-networkd[811]: eth0: DHCPv4 address 168.119.109.176/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 13 00:01:23.632013 ignition[815]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 13 00:01:23.638261 ignition[815]: GET result: OK Mar 13 00:01:23.639427 ignition[815]: parsing config with SHA512: dc04572d3f7088babc2593d874b18779db5d41796b44a4332f4d689293160505a25c1398128f221f3d3b40f2bcec79a7312f8cc8a89ae791beaf5cee4a6bdfd3 Mar 13 00:01:23.645836 unknown[815]: fetched base config from "system" Mar 13 00:01:23.646565 unknown[815]: fetched base config from "system" Mar 13 00:01:23.647290 ignition[815]: fetch: fetch complete Mar 13 00:01:23.646573 unknown[815]: fetched user config from "hetzner" Mar 13 00:01:23.647297 ignition[815]: fetch: fetch passed Mar 13 00:01:23.647377 ignition[815]: Ignition finished successfully
Mar 13 00:01:23.652473 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 13 00:01:23.655940 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 00:01:23.700843 ignition[822]: Ignition 2.22.0 Mar 13 00:01:23.700864 ignition[822]: Stage: kargs Mar 13 00:01:23.701003 ignition[822]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:01:23.701012 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:01:23.705294 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 00:01:23.701814 ignition[822]: kargs: kargs passed Mar 13 00:01:23.701860 ignition[822]: Ignition finished successfully Mar 13 00:01:23.708284 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 00:01:23.757622 ignition[829]: Ignition 2.22.0 Mar 13 00:01:23.757637 ignition[829]: Stage: disks Mar 13 00:01:23.757789 ignition[829]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:01:23.757797 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:01:23.760807 ignition[829]: disks: disks passed Mar 13 00:01:23.760885 ignition[829]: Ignition finished successfully Mar 13 00:01:23.763791 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 00:01:23.765358 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 00:01:23.766638 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 00:01:23.767871 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:01:23.768985 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:01:23.769994 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:01:23.772061 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:01:23.806769 systemd-fsck[838]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Mar 13 00:01:23.813240 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 00:01:23.817348 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 00:01:23.893341 kernel: EXT4-fs (sda9): mounted filesystem 4b09db19-3beb-48c2-8dcb-3eec5602206c r/w with ordered data mode. Quota mode: none. Mar 13 00:01:23.894612 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 00:01:23.896802 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 00:01:23.899968 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:01:23.902502 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 00:01:23.910329 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 13 00:01:23.915329 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 00:01:23.915389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:01:23.923383 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (846) Mar 13 00:01:23.921861 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 13 00:01:23.925364 kernel: BTRFS info (device sda6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 13 00:01:23.925391 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 00:01:23.927379 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 13 00:01:23.946731 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 13 00:01:23.946822 kernel: BTRFS info (device sda6): turning on async discard Mar 13 00:01:23.952937 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 00:01:23.961191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:01:23.989532 coreos-metadata[848]: Mar 13 00:01:23.989 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 13 00:01:23.994736 coreos-metadata[848]: Mar 13 00:01:23.994 INFO Fetch successful Mar 13 00:01:23.999944 coreos-metadata[848]: Mar 13 00:01:23.999 INFO wrote hostname ci-4459-2-4-n-499db54055 to /sysroot/etc/hostname Mar 13 00:01:24.003522 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 00:01:24.008404 initrd-setup-root[874]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 00:01:24.015357 initrd-setup-root[881]: cut: /sysroot/etc/group: No such file or directory Mar 13 00:01:24.021042 initrd-setup-root[888]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 00:01:24.026664 initrd-setup-root[895]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 00:01:24.133578 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 00:01:24.138190 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 00:01:24.140659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 00:01:24.158368 kernel: BTRFS info (device sda6): last unmount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 13 00:01:24.169130 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 00:01:24.179398 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 13 00:01:24.193741 ignition[963]: INFO : Ignition 2.22.0 Mar 13 00:01:24.193741 ignition[963]: INFO : Stage: mount Mar 13 00:01:24.195840 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:01:24.195840 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:01:24.195840 ignition[963]: INFO : mount: mount passed Mar 13 00:01:24.195840 ignition[963]: INFO : Ignition finished successfully Mar 13 00:01:24.200396 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 00:01:24.202557 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 00:01:24.237373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:01:24.271346 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (975) Mar 13 00:01:24.273375 kernel: BTRFS info (device sda6): first mount of filesystem 3c8fd7d8-36f6-4dc1-84ec-9e522970376b Mar 13 00:01:24.273426 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 00:01:24.277648 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 13 00:01:24.277732 kernel: BTRFS info (device sda6): turning on async discard Mar 13 00:01:24.277750 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 00:01:24.280457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 00:01:24.315916 ignition[992]: INFO : Ignition 2.22.0 Mar 13 00:01:24.317411 ignition[992]: INFO : Stage: files Mar 13 00:01:24.317411 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:01:24.317411 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:01:24.320388 ignition[992]: DEBUG : files: compiled without relabeling support, skipping Mar 13 00:01:24.320388 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 00:01:24.320388 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 00:01:24.324255 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 00:01:24.324255 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 00:01:24.326475 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 00:01:24.325433 unknown[992]: wrote ssh authorized keys file for user: core Mar 13 00:01:24.328562 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 00:01:24.328562 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 13 00:01:24.430769 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 00:01:24.502878 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:01:24.504393 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:01:24.514788 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:01:24.514788 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:01:24.514788 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 13 00:01:24.518704 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 13 00:01:24.518704 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 13 00:01:24.518704 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 13 00:01:24.721506 systemd-networkd[811]: eth0: Gained IPv6LL Mar 13 00:01:24.813780 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 13 00:01:25.036557 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 13 00:01:25.038048 ignition[992]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 13 00:01:25.039814 ignition[992]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Mar 13 00:01:25.044499 ignition[992]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 00:01:25.044499 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:01:25.044499 ignition[992]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:01:25.044499 ignition[992]: INFO : files: files passed Mar 13 00:01:25.044499 ignition[992]: INFO : Ignition finished successfully Mar 13 00:01:25.050431 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 00:01:25.055890 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 00:01:25.061553 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 00:01:25.083798 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 00:01:25.084407 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 00:01:25.094337 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:01:25.094337 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:01:25.096517 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:01:25.099970 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:01:25.102735 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 00:01:25.104891 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 00:01:25.170798 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 00:01:25.170987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 00:01:25.173664 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 00:01:25.174893 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:01:25.175968 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 00:01:25.177036 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 00:01:25.203491 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:01:25.207507 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 00:01:25.229163 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:01:25.231154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:01:25.232372 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 00:01:25.233641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 00:01:25.233827 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:01:25.235476 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 00:01:25.236861 systemd[1]: Stopped target basic.target - Basic System. Mar 13 00:01:25.237931 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 00:01:25.238977 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:01:25.240052 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 00:01:25.241165 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:01:25.242358 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 00:01:25.243402 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:01:25.244594 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 00:01:25.245798 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 00:01:25.246790 systemd[1]: Stopped target swap.target - Swaps. 
Mar 13 00:01:25.247644 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 00:01:25.247816 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:01:25.249091 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:01:25.250232 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:01:25.251356 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 00:01:25.251911 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:01:25.252735 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 00:01:25.252857 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:01:25.254633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 00:01:25.254826 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:01:25.255957 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:01:25.256118 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:01:25.257043 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 13 00:01:25.257206 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 00:01:25.260546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:01:25.261124 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:01:25.261391 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:01:25.265827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:01:25.267916 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:01:25.268710 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 13 00:01:25.271231 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:01:25.271424 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:01:25.281024 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 00:01:25.284534 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:01:25.291278 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 13 00:01:25.292539 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:01:25.293206 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 00:01:25.297458 systemd-networkd[811]: eth1: Gained IPv6LL Mar 13 00:01:25.305757 ignition[1046]: INFO : Ignition 2.22.0 Mar 13 00:01:25.305757 ignition[1046]: INFO : Stage: umount Mar 13 00:01:25.307163 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:01:25.307163 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:01:25.307163 ignition[1046]: INFO : umount: umount passed Mar 13 00:01:25.307163 ignition[1046]: INFO : Ignition finished successfully Mar 13 00:01:25.309158 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:01:25.309399 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 00:01:25.310838 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:01:25.310926 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:01:25.311896 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:01:25.311941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:01:25.313877 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 13 00:01:25.313921 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 13 00:01:25.314910 systemd[1]: Stopped target network.target - Network. 
Mar 13 00:01:25.315819 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:01:25.315869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:01:25.317037 systemd[1]: Stopped target paths.target - Path Units. Mar 13 00:01:25.318212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:01:25.322423 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:01:25.324009 systemd[1]: Stopped target slices.target - Slice Units. Mar 13 00:01:25.325584 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:01:25.327044 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:01:25.327128 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:01:25.327938 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:01:25.327970 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:01:25.328949 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 00:01:25.329004 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:01:25.329853 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:01:25.329890 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:01:25.330898 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:01:25.330941 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:01:25.332084 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:01:25.333203 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:01:25.341239 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:01:25.341507 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Mar 13 00:01:25.346032 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 13 00:01:25.346448 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:01:25.346496 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:01:25.349511 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:01:25.349755 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 00:01:25.349877 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 00:01:25.352422 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 13 00:01:25.352913 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 13 00:01:25.354218 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 13 00:01:25.354259 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:01:25.356190 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 13 00:01:25.357884 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 00:01:25.357968 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:01:25.360154 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:01:25.360214 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:01:25.363616 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 00:01:25.363672 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 13 00:01:25.364380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:01:25.366797 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:01:25.381472 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 13 00:01:25.387682 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:01:25.390462 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 00:01:25.390554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 00:01:25.392835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 00:01:25.392868 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:01:25.394700 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 13 00:01:25.394758 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:01:25.397555 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 00:01:25.397607 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 00:01:25.399205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 00:01:25.399254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:01:25.401815 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 13 00:01:25.403436 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 13 00:01:25.403512 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:01:25.406827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 13 00:01:25.406895 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:01:25.408737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:01:25.408824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:01:25.411840 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 00:01:25.412609 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Mar 13 00:01:25.421287 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 00:01:25.421434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 00:01:25.424258 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 00:01:25.426022 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 00:01:25.454479 systemd[1]: Switching root. Mar 13 00:01:25.489520 systemd-journald[244]: Journal stopped Mar 13 00:01:26.485276 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Mar 13 00:01:26.485425 kernel: SELinux: policy capability network_peer_controls=1 Mar 13 00:01:26.485462 kernel: SELinux: policy capability open_perms=1 Mar 13 00:01:26.485485 kernel: SELinux: policy capability extended_socket_class=1 Mar 13 00:01:26.485506 kernel: SELinux: policy capability always_check_network=0 Mar 13 00:01:26.485527 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 13 00:01:26.485555 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 13 00:01:26.485576 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 13 00:01:26.485597 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 13 00:01:26.485618 kernel: SELinux: policy capability userspace_initial_context=0 Mar 13 00:01:26.485641 kernel: audit: type=1403 audit(1773360085.647:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 13 00:01:26.485664 systemd[1]: Successfully loaded SELinux policy in 54.127ms. Mar 13 00:01:26.485708 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.160ms. 
Mar 13 00:01:26.485734 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:01:26.485761 systemd[1]: Detected virtualization kvm.
Mar 13 00:01:26.485784 systemd[1]: Detected architecture arm64.
Mar 13 00:01:26.485814 systemd[1]: Detected first boot.
Mar 13 00:01:26.485839 systemd[1]: Hostname set to .
Mar 13 00:01:26.485862 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:01:26.485885 zram_generator::config[1090]: No configuration found.
Mar 13 00:01:26.485916 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:01:26.485939 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:01:26.485966 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:01:26.485989 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:01:26.486012 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:01:26.486051 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:01:26.486087 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:01:26.486114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:01:26.486141 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:01:26.486165 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:01:26.486188 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:01:26.486211 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:01:26.486235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:01:26.486258 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:01:26.486281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:01:26.491363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:01:26.491417 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:01:26.491437 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:01:26.491448 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:01:26.491460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:01:26.491471 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 13 00:01:26.491480 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:01:26.491491 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:01:26.491503 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:01:26.491513 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:01:26.491523 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:01:26.491533 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:01:26.491543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:01:26.491553 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:01:26.491565 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:01:26.491575 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:01:26.491585 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:01:26.491596 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:01:26.491606 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:01:26.491616 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:01:26.491626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:01:26.491636 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:01:26.491647 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:01:26.491657 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:01:26.491667 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:01:26.491677 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:01:26.491689 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:01:26.491699 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:01:26.491709 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:01:26.491720 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:01:26.491731 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:01:26.491741 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:01:26.491751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:01:26.491762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:01:26.491773 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:01:26.491783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:01:26.491793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:01:26.491803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:01:26.491814 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:01:26.491823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:01:26.491834 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:01:26.491844 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:01:26.491854 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:01:26.491865 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:01:26.491877 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:01:26.491888 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:01:26.491899 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:01:26.491910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:01:26.491920 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:01:26.491930 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:01:26.491946 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:01:26.491956 kernel: fuse: init (API version 7.41)
Mar 13 00:01:26.491967 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:01:26.491977 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:01:26.491987 systemd[1]: Stopped verity-setup.service.
Mar 13 00:01:26.491999 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:01:26.492009 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:01:26.492020 kernel: loop: module loaded
Mar 13 00:01:26.492040 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:01:26.492055 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:01:26.492066 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:01:26.492078 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:01:26.492091 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:01:26.492101 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:01:26.492111 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:01:26.492121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:01:26.492131 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:01:26.492141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:01:26.492151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:01:26.492161 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:01:26.492172 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:01:26.492182 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:01:26.492192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:01:26.492202 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:01:26.492213 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:01:26.492223 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:01:26.492233 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:01:26.492244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:01:26.492254 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:01:26.492266 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:01:26.492276 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:01:26.492287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:01:26.492297 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:01:26.498488 systemd-journald[1154]: Collecting audit messages is disabled.
Mar 13 00:01:26.498529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:01:26.498542 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:01:26.498555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:01:26.498566 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:01:26.498577 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:01:26.498587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:01:26.498599 systemd-journald[1154]: Journal started
Mar 13 00:01:26.498621 systemd-journald[1154]: Runtime Journal (/run/log/journal/11faabcf5f3d432da30f719fd1e68b9e) is 8M, max 76.5M, 68.5M free.
Mar 13 00:01:26.155997 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:01:26.162378 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 00:01:26.163286 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:01:26.506242 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:01:26.509184 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:01:26.513427 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:01:26.516432 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:01:26.520383 kernel: ACPI: bus type drm_connector registered
Mar 13 00:01:26.518728 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:01:26.529274 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:01:26.531354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:01:26.538461 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:01:26.551227 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:01:26.555249 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:01:26.561371 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:01:26.563120 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:01:26.566446 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:01:26.585101 systemd-journald[1154]: Time spent on flushing to /var/log/journal/11faabcf5f3d432da30f719fd1e68b9e is 38.278ms for 1172 entries.
Mar 13 00:01:26.585101 systemd-journald[1154]: System Journal (/var/log/journal/11faabcf5f3d432da30f719fd1e68b9e) is 8M, max 584.8M, 576.8M free.
Mar 13 00:01:26.633201 kernel: loop0: detected capacity change from 0 to 119840
Mar 13 00:01:26.633231 systemd-journald[1154]: Received client request to flush runtime journal.
Mar 13 00:01:26.633255 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:01:26.601184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:01:26.637392 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:01:26.639698 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:01:26.645446 kernel: loop1: detected capacity change from 0 to 209336
Mar 13 00:01:26.674178 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:01:26.680718 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:01:26.686398 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:01:26.706384 kernel: loop2: detected capacity change from 0 to 100632
Mar 13 00:01:26.733340 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 13 00:01:26.733359 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 13 00:01:26.741976 kernel: loop3: detected capacity change from 0 to 8
Mar 13 00:01:26.740746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:01:26.764340 kernel: loop4: detected capacity change from 0 to 119840
Mar 13 00:01:26.782330 kernel: loop5: detected capacity change from 0 to 209336
Mar 13 00:01:26.806339 kernel: loop6: detected capacity change from 0 to 100632
Mar 13 00:01:26.832349 kernel: loop7: detected capacity change from 0 to 8
Mar 13 00:01:26.833391 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 13 00:01:26.833857 (sd-merge)[1237]: Merged extensions into '/usr'.
Mar 13 00:01:26.840375 systemd[1]: Reload requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:01:26.840392 systemd[1]: Reloading...
Mar 13 00:01:26.979332 zram_generator::config[1262]: No configuration found.
Mar 13 00:01:27.051347 ldconfig[1184]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:01:27.196421 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:01:27.196835 systemd[1]: Reloading finished in 355 ms.
Mar 13 00:01:27.215499 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:01:27.216503 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:01:27.217490 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:01:27.233450 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:01:27.239667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:01:27.249491 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:01:27.265550 systemd[1]: Reload requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:01:27.265568 systemd[1]: Reloading...
Mar 13 00:01:27.265780 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:01:27.266140 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:01:27.266480 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:01:27.266733 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:01:27.267470 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:01:27.267693 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Mar 13 00:01:27.267740 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Mar 13 00:01:27.271462 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:01:27.271589 systemd-tmpfiles[1302]: Skipping /boot
Mar 13 00:01:27.279184 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:01:27.279914 systemd-tmpfiles[1302]: Skipping /boot
Mar 13 00:01:27.334512 systemd-udevd[1303]: Using default interface naming scheme 'v255'.
Mar 13 00:01:27.351343 zram_generator::config[1332]: No configuration found.
Mar 13 00:01:27.583664 systemd[1]: Reloading finished in 317 ms.
Mar 13 00:01:27.609375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:01:27.617163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:01:27.648905 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:01:27.652617 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:01:27.653477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:01:27.656360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:01:27.660630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:01:27.675875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:01:27.676693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:01:27.676824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:01:27.680166 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:01:27.684615 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:01:27.689094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:01:27.696731 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:01:27.700927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:01:27.701166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:01:27.702425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:01:27.702583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:01:27.709951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 13 00:01:27.715105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:01:27.728595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:01:27.736262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:01:27.737096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:01:27.739327 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:01:27.740335 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:01:27.741958 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:01:27.742365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:01:27.743885 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 13 00:01:27.763231 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:01:27.765484 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:01:27.775805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:01:27.787766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:01:27.790155 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:01:27.791251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:01:27.791346 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:01:27.796095 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 00:01:27.801499 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:01:27.802924 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:01:27.806741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:01:27.806948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:01:27.808189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:01:27.808358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:01:27.809708 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:01:27.815019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:01:27.818764 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:01:27.840255 augenrules[1459]: No rules
Mar 13 00:01:27.841345 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:01:27.845833 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:01:27.846057 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:01:27.847658 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:01:27.848224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:01:27.853666 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:01:27.854408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:01:27.856951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:01:27.867891 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:01:27.869569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:01:27.895590 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:01:27.901519 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:01:27.945755 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 13 00:01:27.945819 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 13 00:01:27.945831 kernel: [drm] features: -context_init
Mar 13 00:01:27.946619 kernel: [drm] number of scanouts: 1
Mar 13 00:01:27.947813 kernel: [drm] number of cap sets: 0
Mar 13 00:01:27.947877 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Mar 13 00:01:27.954317 kernel: Console: switching to colour frame buffer device 160x50
Mar 13 00:01:27.967351 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 13 00:01:28.005552 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 13 00:01:28.007485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:01:28.011550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:01:28.018097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:01:28.025484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:01:28.026191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:01:28.026237 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:01:28.026261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:01:28.051137 systemd-networkd[1419]: lo: Link UP
Mar 13 00:01:28.051147 systemd-networkd[1419]: lo: Gained carrier
Mar 13 00:01:28.058093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:01:28.058292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:01:28.059711 systemd-networkd[1419]: Enumeration completed
Mar 13 00:01:28.060293 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:01:28.061574 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:01:28.062708 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:01:28.063711 systemd-resolved[1421]: Positive Trust Anchors:
Mar 13 00:01:28.065259 systemd-resolved[1421]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:01:28.065296 systemd-resolved[1421]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:01:28.065727 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:01:28.065731 systemd-networkd[1419]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:01:28.066698 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:01:28.069643 systemd-networkd[1419]: eth0: Link UP
Mar 13 00:01:28.069762 systemd-networkd[1419]: eth0: Gained carrier
Mar 13 00:01:28.069789 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:01:28.071551 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:01:28.072214 systemd-resolved[1421]: Using system hostname 'ci-4459-2-4-n-499db54055'.
Mar 13 00:01:28.074455 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:01:28.074845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:01:28.075063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:01:28.076391 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 13 00:01:28.077163 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:01:28.080692 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:01:28.080889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:01:28.083119 systemd[1]: Reached target network.target - Network.
Mar 13 00:01:28.084270 systemd-networkd[1419]: eth1: Link UP
Mar 13 00:01:28.086493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:01:28.087276 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:01:28.087660 systemd-networkd[1419]: eth1: Gained carrier
Mar 13 00:01:28.087693 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:01:28.091836 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:01:28.093945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:01:28.095499 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:01:28.096671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:01:28.096706 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:01:28.098358 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:01:28.099107 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:01:28.099902 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:01:28.102340 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:01:28.104387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:01:28.107237 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:01:28.113934 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:01:28.115859 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:01:28.118394 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:01:28.124383 systemd-networkd[1419]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 13 00:01:28.129156 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:01:28.129901 systemd-networkd[1419]: eth0: DHCPv4 address 168.119.109.176/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 13 00:01:28.130415 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection.
Mar 13 00:01:28.131115 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:01:28.134161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:01:28.134958 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:01:28.136904 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:01:28.137534 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:01:28.138100 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:01:28.138119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:01:28.141487 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:01:28.144667 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:01:28.148472 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:01:28.153512 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:01:28.158645 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 00:01:28.161351 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:01:28.163394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:01:28.167714 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 00:01:28.175246 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 00:01:28.180873 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 13 00:01:28.181678 extend-filesystems[1509]: Found /dev/sda6 Mar 13 00:01:28.185478 extend-filesystems[1509]: Found /dev/sda9 Mar 13 00:01:28.191786 extend-filesystems[1509]: Checking size of /dev/sda9 Mar 13 00:01:28.188820 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:01:28.617295 extend-filesystems[1509]: Resized partition /dev/sda9 Mar 13 00:01:28.621194 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 13 00:01:28.192754 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:01:28.621307 extend-filesystems[1527]: resize2fs 1.47.3 (8-Jul-2025) Mar 13 00:01:28.612559 systemd-timesyncd[1450]: Contacted time server 5.9.193.27:123 (0.flatcar.pool.ntp.org). Mar 13 00:01:28.612630 systemd-timesyncd[1450]: Initial clock synchronization to Fri 2026-03-13 00:01:28.612430 UTC. Mar 13 00:01:28.635686 jq[1508]: false Mar 13 00:01:28.612679 systemd-resolved[1421]: Clock change detected. Flushing caches. Mar 13 00:01:28.629326 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 13 00:01:28.633344 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 00:01:28.633894 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 00:01:28.638706 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:01:28.649440 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 00:01:28.655828 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 13 00:01:28.661661 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:01:28.666891 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:01:28.668020 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:01:28.668425 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:01:28.668585 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:01:28.723461 jq[1533]: true Mar 13 00:01:28.727155 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:01:28.728872 update_engine[1531]: I20260313 00:01:28.728209 1531 main.cc:92] Flatcar Update Engine starting Mar 13 00:01:28.745149 jq[1552]: true Mar 13 00:01:28.754833 tar[1540]: linux-arm64/LICENSE Mar 13 00:01:28.754833 tar[1540]: linux-arm64/helm Mar 13 00:01:28.761739 dbus-daemon[1506]: [system] SELinux support is enabled Mar 13 00:01:28.762742 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 13 00:01:28.765487 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:01:28.765515 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:01:28.766419 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:01:28.766433 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 00:01:28.772930 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:01:28.773339 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 00:01:28.785403 systemd[1]: Started update-engine.service - Update Engine. Mar 13 00:01:28.789531 update_engine[1531]: I20260313 00:01:28.789286 1531 update_check_scheduler.cc:74] Next update check in 11m21s Mar 13 00:01:28.791340 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 13 00:01:28.794056 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 00:01:28.799911 coreos-metadata[1505]: Mar 13 00:01:28.793 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 13 00:01:28.811683 coreos-metadata[1505]: Mar 13 00:01:28.801 INFO Fetch successful Mar 13 00:01:28.811683 coreos-metadata[1505]: Mar 13 00:01:28.801 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 13 00:01:28.811683 coreos-metadata[1505]: Mar 13 00:01:28.804 INFO Fetch successful Mar 13 00:01:28.806658 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Mar 13 00:01:28.811834 extend-filesystems[1527]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 13 00:01:28.811834 extend-filesystems[1527]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 13 00:01:28.811834 extend-filesystems[1527]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 13 00:01:28.808646 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:01:28.822730 extend-filesystems[1509]: Resized filesystem in /dev/sda9 Mar 13 00:01:28.860999 bash[1574]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:01:28.860282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 00:01:28.869339 systemd[1]: Starting sshkeys.service... Mar 13 00:01:28.884821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:01:28.935853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:01:28.937324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:01:28.945792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:01:28.958056 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 13 00:01:28.964373 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 13 00:01:29.004123 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 00:01:29.005193 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 13 00:01:29.042109 coreos-metadata[1588]: Mar 13 00:01:29.041 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 13 00:01:29.048924 coreos-metadata[1588]: Mar 13 00:01:29.048 INFO Fetch successful Mar 13 00:01:29.051834 unknown[1588]: wrote ssh authorized keys file for user: core Mar 13 00:01:29.114924 update-ssh-keys[1598]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:01:29.118130 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:01:29.134895 systemd[1]: Finished sshkeys.service. Mar 13 00:01:29.232222 systemd-logind[1528]: New seat seat0. Mar 13 00:01:29.239642 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (Power Button) Mar 13 00:01:29.240773 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 13 00:01:29.242390 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:01:29.246797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 13 00:01:29.251649 containerd[1547]: time="2026-03-13T00:01:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:01:29.253976 containerd[1547]: time="2026-03-13T00:01:29.252845331Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:01:29.287989 containerd[1547]: time="2026-03-13T00:01:29.287923251Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.68µs" Mar 13 00:01:29.291093 containerd[1547]: time="2026-03-13T00:01:29.290139851Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:01:29.291093 containerd[1547]: time="2026-03-13T00:01:29.290535971Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:01:29.291093 containerd[1547]: time="2026-03-13T00:01:29.290714291Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:01:29.291093 containerd[1547]: time="2026-03-13T00:01:29.290731611Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:01:29.291093 containerd[1547]: time="2026-03-13T00:01:29.290821851Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:01:29.291093 containerd[1547]: time="2026-03-13T00:01:29.290889251Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:01:29.291093 containerd[1547]: time="2026-03-13T00:01:29.290901491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 
00:01:29.291455 containerd[1547]: time="2026-03-13T00:01:29.291429251Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:01:29.291511 containerd[1547]: time="2026-03-13T00:01:29.291497291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:01:29.293449 containerd[1547]: time="2026-03-13T00:01:29.291547851Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:01:29.293449 containerd[1547]: time="2026-03-13T00:01:29.291561211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:01:29.293449 containerd[1547]: time="2026-03-13T00:01:29.291659291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:01:29.293449 containerd[1547]: time="2026-03-13T00:01:29.291883691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:01:29.293449 containerd[1547]: time="2026-03-13T00:01:29.291914011Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:01:29.293449 containerd[1547]: time="2026-03-13T00:01:29.291925051Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:01:29.293727 containerd[1547]: time="2026-03-13T00:01:29.293649691Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:01:29.294151 
containerd[1547]: time="2026-03-13T00:01:29.294123211Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:01:29.294320 containerd[1547]: time="2026-03-13T00:01:29.294302051Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:01:29.304621 containerd[1547]: time="2026-03-13T00:01:29.304542331Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.304879611Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.304915891Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.304932531Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.304948811Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.304960811Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.304974051Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.304989171Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.305005011Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: 
time="2026-03-13T00:01:29.305017691Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.305030051Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.305049051Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.305243491Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.305268211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:01:29.306097 containerd[1547]: time="2026-03-13T00:01:29.305284091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305297931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305312091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305325011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305338211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305350811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305365411Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305377371Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305388851Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305705211Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305721291Z" level=info msg="Start snapshots syncer" Mar 13 00:01:29.306399 containerd[1547]: time="2026-03-13T00:01:29.305791571Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:01:29.307151 containerd[1547]: time="2026-03-13T00:01:29.306058051Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309000091Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309118611Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309444691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309475091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309496971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309509131Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309522931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309534651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:01:29.309579 containerd[1547]: time="2026-03-13T00:01:29.309548491Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:01:29.309856 containerd[1547]: time="2026-03-13T00:01:29.309832771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:01:29.310000 containerd[1547]: time="2026-03-13T00:01:29.309935011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:01:29.310000 containerd[1547]: time="2026-03-13T00:01:29.309953851Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:01:29.310092 containerd[1547]: time="2026-03-13T00:01:29.310079251Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:01:29.310234 containerd[1547]: time="2026-03-13T00:01:29.310206891Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:01:29.310316 containerd[1547]: time="2026-03-13T00:01:29.310290731Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:01:29.310369 containerd[1547]: time="2026-03-13T00:01:29.310356131Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:01:29.310422 containerd[1547]: time="2026-03-13T00:01:29.310410571Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:01:29.310466 containerd[1547]: time="2026-03-13T00:01:29.310456131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:01:29.310526 containerd[1547]: time="2026-03-13T00:01:29.310514611Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:01:29.310662 containerd[1547]: time="2026-03-13T00:01:29.310652531Z" level=info msg="runtime interface created" Mar 13 00:01:29.310718 containerd[1547]: time="2026-03-13T00:01:29.310700131Z" level=info msg="created NRI interface" Mar 13 00:01:29.310828 containerd[1547]: time="2026-03-13T00:01:29.310813371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:01:29.310908 containerd[1547]: time="2026-03-13T00:01:29.310897131Z" level=info msg="Connect containerd service" Mar 13 00:01:29.310999 containerd[1547]: time="2026-03-13T00:01:29.310987411Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:01:29.314481 
containerd[1547]: time="2026-03-13T00:01:29.314201251Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:01:29.385282 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:01:29.485324 containerd[1547]: time="2026-03-13T00:01:29.485252411Z" level=info msg="Start subscribing containerd event" Mar 13 00:01:29.485491 containerd[1547]: time="2026-03-13T00:01:29.485478011Z" level=info msg="Start recovering state" Mar 13 00:01:29.488248 containerd[1547]: time="2026-03-13T00:01:29.488196771Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:01:29.489085 containerd[1547]: time="2026-03-13T00:01:29.488393131Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:01:29.489288 containerd[1547]: time="2026-03-13T00:01:29.489269011Z" level=info msg="Start event monitor" Mar 13 00:01:29.489366 containerd[1547]: time="2026-03-13T00:01:29.489355451Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:01:29.489560 containerd[1547]: time="2026-03-13T00:01:29.489544531Z" level=info msg="Start streaming server" Mar 13 00:01:29.489626 containerd[1547]: time="2026-03-13T00:01:29.489614931Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:01:29.489670 containerd[1547]: time="2026-03-13T00:01:29.489660171Z" level=info msg="runtime interface starting up..." Mar 13 00:01:29.489945 containerd[1547]: time="2026-03-13T00:01:29.489929131Z" level=info msg="starting plugins..." 
Mar 13 00:01:29.490030 containerd[1547]: time="2026-03-13T00:01:29.490019211Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:01:29.492006 containerd[1547]: time="2026-03-13T00:01:29.491629931Z" level=info msg="containerd successfully booted in 0.241463s" Mar 13 00:01:29.491757 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:01:29.523281 tar[1540]: linux-arm64/README.md Mar 13 00:01:29.541790 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:01:29.872269 systemd-networkd[1419]: eth1: Gained IPv6LL Mar 13 00:01:29.876750 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:01:29.880045 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:01:29.884252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:01:29.887625 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:01:29.938233 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:01:30.384194 systemd-networkd[1419]: eth0: Gained IPv6LL Mar 13 00:01:30.585567 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:01:30.607494 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:01:30.611310 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:01:30.630863 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:01:30.631168 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:01:30.634372 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:01:30.651198 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:01:30.657126 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:01:30.662229 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Mar 13 00:01:30.663268 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:01:30.665249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:01:30.668536 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:01:30.669659 systemd[1]: Startup finished in 2.340s (kernel) + 5.029s (initrd) + 4.659s (userspace) = 12.029s. Mar 13 00:01:30.685490 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:01:31.201123 kubelet[1663]: E0313 00:01:31.200953 1663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:01:31.204904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:01:31.205053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:01:31.207088 systemd[1]: kubelet.service: Consumed 839ms CPU time, 257M memory peak. Mar 13 00:01:41.327029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:01:41.330789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:01:41.495209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:01:41.508007 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:01:41.552026 kubelet[1684]: E0313 00:01:41.551973 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:01:41.556419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:01:41.556575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:01:41.557297 systemd[1]: kubelet.service: Consumed 178ms CPU time, 104.9M memory peak. Mar 13 00:01:51.576991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:01:51.579129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:01:51.749677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:01:51.765694 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:01:51.812003 kubelet[1700]: E0313 00:01:51.811953 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:01:51.817610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:01:51.817743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:01:51.818291 systemd[1]: kubelet.service: Consumed 166ms CPU time, 104.5M memory peak. 
Mar 13 00:02:01.826683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 13 00:02:01.830254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:02:01.997530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:02:02.012092 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:02:02.061691 kubelet[1714]: E0313 00:02:02.061617 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:02:02.064615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:02:02.064765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:02:02.065670 systemd[1]: kubelet.service: Consumed 170ms CPU time, 104.9M memory peak. Mar 13 00:02:02.697995 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:02:02.702134 systemd[1]: Started sshd@0-168.119.109.176:22-20.161.92.111:49428.service - OpenSSH per-connection server daemon (20.161.92.111:49428). Mar 13 00:02:03.257110 sshd[1723]: Accepted publickey for core from 20.161.92.111 port 49428 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:02:03.259667 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:02:03.280190 systemd-logind[1528]: New session 1 of user core. Mar 13 00:02:03.281428 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:02:03.282927 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 13 00:02:03.328101 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 13 00:02:03.331464 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 13 00:02:03.345825 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 13 00:02:03.349379 systemd-logind[1528]: New session c1 of user core.
Mar 13 00:02:03.488707 systemd[1728]: Queued start job for default target default.target.
Mar 13 00:02:03.500962 systemd[1728]: Created slice app.slice - User Application Slice.
Mar 13 00:02:03.501000 systemd[1728]: Reached target paths.target - Paths.
Mar 13 00:02:03.501043 systemd[1728]: Reached target timers.target - Timers.
Mar 13 00:02:03.502421 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 13 00:02:03.518580 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 13 00:02:03.518699 systemd[1728]: Reached target sockets.target - Sockets.
Mar 13 00:02:03.518760 systemd[1728]: Reached target basic.target - Basic System.
Mar 13 00:02:03.518793 systemd[1728]: Reached target default.target - Main User Target.
Mar 13 00:02:03.518822 systemd[1728]: Startup finished in 161ms.
Mar 13 00:02:03.519090 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 13 00:02:03.529440 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 13 00:02:03.832038 systemd[1]: Started sshd@1-168.119.109.176:22-20.161.92.111:49436.service - OpenSSH per-connection server daemon (20.161.92.111:49436).
Mar 13 00:02:04.359129 sshd[1739]: Accepted publickey for core from 20.161.92.111 port 49436 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU
Mar 13 00:02:04.360843 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:02:04.367748 systemd-logind[1528]: New session 2 of user core.
Mar 13 00:02:04.378411 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 13 00:02:04.647492 sshd[1742]: Connection closed by 20.161.92.111 port 49436
Mar 13 00:02:04.648425 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Mar 13 00:02:04.655277 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit.
Mar 13 00:02:04.655993 systemd[1]: sshd@1-168.119.109.176:22-20.161.92.111:49436.service: Deactivated successfully.
Mar 13 00:02:04.658240 systemd[1]: session-2.scope: Deactivated successfully.
Mar 13 00:02:04.661772 systemd-logind[1528]: Removed session 2.
Mar 13 00:02:04.765954 systemd[1]: Started sshd@2-168.119.109.176:22-20.161.92.111:49446.service - OpenSSH per-connection server daemon (20.161.92.111:49446).
Mar 13 00:02:05.292509 sshd[1748]: Accepted publickey for core from 20.161.92.111 port 49446 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU
Mar 13 00:02:05.296963 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:02:05.304565 systemd-logind[1528]: New session 3 of user core.
Mar 13 00:02:05.317407 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 13 00:02:05.573706 sshd[1751]: Connection closed by 20.161.92.111 port 49446
Mar 13 00:02:05.575040 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Mar 13 00:02:05.580644 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit.
Mar 13 00:02:05.581043 systemd[1]: sshd@2-168.119.109.176:22-20.161.92.111:49446.service: Deactivated successfully.
Mar 13 00:02:05.584668 systemd[1]: session-3.scope: Deactivated successfully.
Mar 13 00:02:05.587312 systemd-logind[1528]: Removed session 3.
Mar 13 00:02:05.679276 systemd[1]: Started sshd@3-168.119.109.176:22-20.161.92.111:49456.service - OpenSSH per-connection server daemon (20.161.92.111:49456).
Mar 13 00:02:06.212145 sshd[1757]: Accepted publickey for core from 20.161.92.111 port 49456 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU
Mar 13 00:02:06.213738 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:02:06.220098 systemd-logind[1528]: New session 4 of user core.
Mar 13 00:02:06.228416 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 13 00:02:06.498306 sshd[1760]: Connection closed by 20.161.92.111 port 49456
Mar 13 00:02:06.498968 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Mar 13 00:02:06.504926 systemd[1]: sshd@3-168.119.109.176:22-20.161.92.111:49456.service: Deactivated successfully.
Mar 13 00:02:06.508573 systemd[1]: session-4.scope: Deactivated successfully.
Mar 13 00:02:06.509644 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit.
Mar 13 00:02:06.511421 systemd-logind[1528]: Removed session 4.
Mar 13 00:02:06.613361 systemd[1]: Started sshd@4-168.119.109.176:22-20.161.92.111:49468.service - OpenSSH per-connection server daemon (20.161.92.111:49468).
Mar 13 00:02:07.154121 sshd[1766]: Accepted publickey for core from 20.161.92.111 port 49468 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU
Mar 13 00:02:07.155532 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:02:07.161993 systemd-logind[1528]: New session 5 of user core.
Mar 13 00:02:07.168363 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 13 00:02:07.363786 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 13 00:02:07.364075 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:02:07.381552 sudo[1770]: pam_unix(sudo:session): session closed for user root
Mar 13 00:02:07.479500 sshd[1769]: Connection closed by 20.161.92.111 port 49468
Mar 13 00:02:07.480625 sshd-session[1766]: pam_unix(sshd:session): session closed for user core
Mar 13 00:02:07.486726 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit.
Mar 13 00:02:07.487304 systemd[1]: sshd@4-168.119.109.176:22-20.161.92.111:49468.service: Deactivated successfully.
Mar 13 00:02:07.490826 systemd[1]: session-5.scope: Deactivated successfully.
Mar 13 00:02:07.494905 systemd-logind[1528]: Removed session 5.
Mar 13 00:02:07.587910 systemd[1]: Started sshd@5-168.119.109.176:22-20.161.92.111:49484.service - OpenSSH per-connection server daemon (20.161.92.111:49484).
Mar 13 00:02:08.113137 sshd[1776]: Accepted publickey for core from 20.161.92.111 port 49484 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU
Mar 13 00:02:08.115164 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:02:08.121018 systemd-logind[1528]: New session 6 of user core.
Mar 13 00:02:08.126346 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 13 00:02:08.308269 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 13 00:02:08.308530 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:02:08.314998 sudo[1781]: pam_unix(sudo:session): session closed for user root
Mar 13 00:02:08.322218 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 13 00:02:08.322508 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:02:08.333338 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:02:08.377905 augenrules[1803]: No rules
Mar 13 00:02:08.379341 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:02:08.380460 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:02:08.383232 sudo[1780]: pam_unix(sudo:session): session closed for user root
Mar 13 00:02:08.478099 sshd[1779]: Connection closed by 20.161.92.111 port 49484
Mar 13 00:02:08.477197 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Mar 13 00:02:08.482641 systemd[1]: sshd@5-168.119.109.176:22-20.161.92.111:49484.service: Deactivated successfully.
Mar 13 00:02:08.484879 systemd[1]: session-6.scope: Deactivated successfully.
Mar 13 00:02:08.487874 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit.
Mar 13 00:02:08.488998 systemd-logind[1528]: Removed session 6.
Mar 13 00:02:08.585597 systemd[1]: Started sshd@6-168.119.109.176:22-20.161.92.111:49486.service - OpenSSH per-connection server daemon (20.161.92.111:49486).
Mar 13 00:02:09.111479 sshd[1812]: Accepted publickey for core from 20.161.92.111 port 49486 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU
Mar 13 00:02:09.113198 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:02:09.117849 systemd-logind[1528]: New session 7 of user core.
Mar 13 00:02:09.130414 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 13 00:02:09.309186 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 00:02:09.310381 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:02:09.653462 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 00:02:09.667639 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 00:02:09.921887 dockerd[1833]: time="2026-03-13T00:02:09.921716955Z" level=info msg="Starting up"
Mar 13 00:02:09.923339 dockerd[1833]: time="2026-03-13T00:02:09.923256210Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 13 00:02:09.939369 dockerd[1833]: time="2026-03-13T00:02:09.939319821Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 13 00:02:09.956753 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1257926153-merged.mount: Deactivated successfully.
Mar 13 00:02:09.988399 dockerd[1833]: time="2026-03-13T00:02:09.988337323Z" level=info msg="Loading containers: start."
Mar 13 00:02:10.002194 kernel: Initializing XFRM netlink socket
Mar 13 00:02:10.253284 systemd-networkd[1419]: docker0: Link UP
Mar 13 00:02:10.257788 dockerd[1833]: time="2026-03-13T00:02:10.257751655Z" level=info msg="Loading containers: done."
Mar 13 00:02:10.275866 dockerd[1833]: time="2026-03-13T00:02:10.275471217Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 13 00:02:10.275866 dockerd[1833]: time="2026-03-13T00:02:10.275569176Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 13 00:02:10.275866 dockerd[1833]: time="2026-03-13T00:02:10.275690534Z" level=info msg="Initializing buildkit"
Mar 13 00:02:10.299587 dockerd[1833]: time="2026-03-13T00:02:10.299542641Z" level=info msg="Completed buildkit initialization"
Mar 13 00:02:10.307541 dockerd[1833]: time="2026-03-13T00:02:10.307485396Z" level=info msg="Daemon has completed initialization"
Mar 13 00:02:10.307964 dockerd[1833]: time="2026-03-13T00:02:10.307766632Z" level=info msg="API listen on /run/docker.sock"
Mar 13 00:02:10.311173 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 13 00:02:10.800806 containerd[1547]: time="2026-03-13T00:02:10.800646280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 13 00:02:10.955556 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2767194577-merged.mount: Deactivated successfully.
Mar 13 00:02:11.305460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968656849.mount: Deactivated successfully.
Mar 13 00:02:12.076481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 13 00:02:12.079634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:02:12.226782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:02:12.237389 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:02:12.283076 kubelet[2109]: E0313 00:02:12.283020 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:02:12.287523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:02:12.287703 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:02:12.289224 systemd[1]: kubelet.service: Consumed 155ms CPU time, 106.5M memory peak.
Mar 13 00:02:12.469619 containerd[1547]: time="2026-03-13T00:02:12.468338323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:12.470236 containerd[1547]: time="2026-03-13T00:02:12.470159498Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390272"
Mar 13 00:02:12.471279 containerd[1547]: time="2026-03-13T00:02:12.471205364Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:12.479167 containerd[1547]: time="2026-03-13T00:02:12.478846699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:12.480924 containerd[1547]: time="2026-03-13T00:02:12.480727473Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 1.679863836s"
Mar 13 00:02:12.480924 containerd[1547]: time="2026-03-13T00:02:12.480787112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 13 00:02:12.481503 containerd[1547]: time="2026-03-13T00:02:12.481464063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 13 00:02:13.641096 update_engine[1531]: I20260313 00:02:13.640803 1531 update_attempter.cc:509] Updating boot flags...
Mar 13 00:02:14.075856 containerd[1547]: time="2026-03-13T00:02:14.075656683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:14.077674 containerd[1547]: time="2026-03-13T00:02:14.077610859Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552126"
Mar 13 00:02:14.078500 containerd[1547]: time="2026-03-13T00:02:14.078444689Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:14.082508 containerd[1547]: time="2026-03-13T00:02:14.082430001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:14.084009 containerd[1547]: time="2026-03-13T00:02:14.083212671Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.60167017s"
Mar 13 00:02:14.084009 containerd[1547]: time="2026-03-13T00:02:14.083307190Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 13 00:02:14.084220 containerd[1547]: time="2026-03-13T00:02:14.084091661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 13 00:02:15.195244 containerd[1547]: time="2026-03-13T00:02:15.195175978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:15.196784 containerd[1547]: time="2026-03-13T00:02:15.196738800Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301325"
Mar 13 00:02:15.197200 containerd[1547]: time="2026-03-13T00:02:15.197160915Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:15.200281 containerd[1547]: time="2026-03-13T00:02:15.200225001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:15.202409 containerd[1547]: time="2026-03-13T00:02:15.202075540Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.11793476s"
Mar 13 00:02:15.202409 containerd[1547]: time="2026-03-13T00:02:15.202114059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 13 00:02:15.203802 containerd[1547]: time="2026-03-13T00:02:15.203687721Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 13 00:02:16.171697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866870318.mount: Deactivated successfully.
Mar 13 00:02:16.538190 containerd[1547]: time="2026-03-13T00:02:16.538006342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:16.540281 containerd[1547]: time="2026-03-13T00:02:16.540227638Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148896"
Mar 13 00:02:16.540995 containerd[1547]: time="2026-03-13T00:02:16.540948911Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:16.542990 containerd[1547]: time="2026-03-13T00:02:16.542954929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:16.543724 containerd[1547]: time="2026-03-13T00:02:16.543684161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.339946201s"
Mar 13 00:02:16.543846 containerd[1547]: time="2026-03-13T00:02:16.543829240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 13 00:02:16.544530 containerd[1547]: time="2026-03-13T00:02:16.544498673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 13 00:02:17.005934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617541120.mount: Deactivated successfully.
Mar 13 00:02:17.955214 containerd[1547]: time="2026-03-13T00:02:17.955145801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:17.955734 containerd[1547]: time="2026-03-13T00:02:17.955660156Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Mar 13 00:02:17.957433 containerd[1547]: time="2026-03-13T00:02:17.957342179Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:17.961094 containerd[1547]: time="2026-03-13T00:02:17.960594546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:17.962069 containerd[1547]: time="2026-03-13T00:02:17.961801334Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.417256742s"
Mar 13 00:02:17.962069 containerd[1547]: time="2026-03-13T00:02:17.961847654Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 13 00:02:17.962339 containerd[1547]: time="2026-03-13T00:02:17.962309929Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 13 00:02:18.399986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022882516.mount: Deactivated successfully.
Mar 13 00:02:18.406850 containerd[1547]: time="2026-03-13T00:02:18.406783153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:02:18.407851 containerd[1547]: time="2026-03-13T00:02:18.407582626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Mar 13 00:02:18.409111 containerd[1547]: time="2026-03-13T00:02:18.409014612Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:02:18.412668 containerd[1547]: time="2026-03-13T00:02:18.412105344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:02:18.413355 containerd[1547]: time="2026-03-13T00:02:18.413291173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 450.943804ms"
Mar 13 00:02:18.413355 containerd[1547]: time="2026-03-13T00:02:18.413352572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 13 00:02:18.413965 containerd[1547]: time="2026-03-13T00:02:18.413935207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 13 00:02:18.870922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59337339.mount: Deactivated successfully.
Mar 13 00:02:19.638106 containerd[1547]: time="2026-03-13T00:02:19.638028229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:19.639939 containerd[1547]: time="2026-03-13T00:02:19.639883653Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878"
Mar 13 00:02:19.640779 containerd[1547]: time="2026-03-13T00:02:19.640742325Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:19.644905 containerd[1547]: time="2026-03-13T00:02:19.644842649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:19.647209 containerd[1547]: time="2026-03-13T00:02:19.647117270Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.232997065s"
Mar 13 00:02:19.647209 containerd[1547]: time="2026-03-13T00:02:19.647168109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 13 00:02:22.326670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 13 00:02:22.332318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:02:22.504272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:02:22.513848 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:02:22.557418 kubelet[2304]: E0313 00:02:22.557367 2304 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:02:22.560654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:02:22.560777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:02:22.561626 systemd[1]: kubelet.service: Consumed 163ms CPU time, 106.1M memory peak.
Mar 13 00:02:25.191247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:02:25.191710 systemd[1]: kubelet.service: Consumed 163ms CPU time, 106.1M memory peak.
Mar 13 00:02:25.194476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:02:25.230739 systemd[1]: Reload requested from client PID 2318 ('systemctl') (unit session-7.scope)...
Mar 13 00:02:25.230754 systemd[1]: Reloading...
Mar 13 00:02:25.366102 zram_generator::config[2365]: No configuration found.
Mar 13 00:02:25.553767 systemd[1]: Reloading finished in 322 ms.
Mar 13 00:02:25.611158 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 00:02:25.611331 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 00:02:25.613187 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:02:25.613282 systemd[1]: kubelet.service: Consumed 109ms CPU time, 94.9M memory peak.
Mar 13 00:02:25.615924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:02:25.770304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:02:25.782587 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:02:25.827108 kubelet[2410]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:02:25.827488 kubelet[2410]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 00:02:25.827488 kubelet[2410]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:02:25.827488 kubelet[2410]: I0313 00:02:25.827226 2410 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:02:26.404901 kubelet[2410]: I0313 00:02:26.404857 2410 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 13 00:02:26.405937 kubelet[2410]: I0313 00:02:26.405028 2410 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:02:26.405937 kubelet[2410]: I0313 00:02:26.405495 2410 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:02:26.435931 kubelet[2410]: E0313 00:02:26.435829 2410 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://168.119.109.176:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.109.176:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:02:26.439685 kubelet[2410]: I0313 00:02:26.439655 2410 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:02:26.451518 kubelet[2410]: I0313 00:02:26.451493 2410 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:02:26.456018 kubelet[2410]: I0313 00:02:26.455964 2410 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 13 00:02:26.458251 kubelet[2410]: I0313 00:02:26.458179 2410 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:02:26.458503 kubelet[2410]: I0313 00:02:26.458250 2410 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-499db54055","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:02:26.458503 kubelet[2410]: I0313 00:02:26.458491 2410 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:02:26.458503 kubelet[2410]: I0313 00:02:26.458502 2410 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 00:02:26.458782 kubelet[2410]: I0313 00:02:26.458738 2410 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:02:26.462307 kubelet[2410]: I0313 00:02:26.462270 2410 kubelet.go:480] "Attempting to sync node with API server"
Mar 13 00:02:26.462307 kubelet[2410]: I0313 00:02:26.462314 2410 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:02:26.462817 kubelet[2410]: I0313 00:02:26.462343 2410 kubelet.go:386] "Adding apiserver pod source"
Mar 13 00:02:26.462817 kubelet[2410]: I0313 00:02:26.462362 2410 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:02:26.468396 kubelet[2410]: E0313 00:02:26.468347 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.109.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.109.176:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:02:26.468527 kubelet[2410]: E0313 00:02:26.468447 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.109.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-499db54055&limit=500&resourceVersion=0\": dial tcp 168.119.109.176:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:02:26.468838 kubelet[2410]: I0313 00:02:26.468815 2410 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:02:26.469579 kubelet[2410]: I0313 00:02:26.469547 2410 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar
13 00:02:26.469706 kubelet[2410]: W0313 00:02:26.469685 2410 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:02:26.474304 kubelet[2410]: I0313 00:02:26.473873 2410 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:02:26.474304 kubelet[2410]: I0313 00:02:26.473921 2410 server.go:1289] "Started kubelet" Mar 13 00:02:26.480119 kubelet[2410]: E0313 00:02:26.478274 2410 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.109.176:6443/api/v1/namespaces/default/events\": dial tcp 168.119.109.176:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-499db54055.189c3da8556942b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-499db54055,UID:ci-4459-2-4-n-499db54055,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-499db54055,},FirstTimestamp:2026-03-13 00:02:26.473894581 +0000 UTC m=+0.683921592,LastTimestamp:2026-03-13 00:02:26.473894581 +0000 UTC m=+0.683921592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-499db54055,}" Mar 13 00:02:26.481169 kubelet[2410]: I0313 00:02:26.481041 2410 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:02:26.483083 kubelet[2410]: I0313 00:02:26.482798 2410 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:02:26.489636 kubelet[2410]: I0313 00:02:26.489599 2410 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:02:26.489800 kubelet[2410]: I0313 00:02:26.489583 2410 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:02:26.490135 kubelet[2410]: I0313 00:02:26.490119 2410 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:02:26.490690 kubelet[2410]: I0313 00:02:26.490674 2410 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:02:26.494214 kubelet[2410]: I0313 00:02:26.494181 2410 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:02:26.495472 kubelet[2410]: I0313 00:02:26.495358 2410 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:02:26.495472 kubelet[2410]: I0313 00:02:26.495432 2410 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:02:26.495778 kubelet[2410]: E0313 00:02:26.495740 2410 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-499db54055\" not found" Mar 13 00:02:26.496826 kubelet[2410]: I0313 00:02:26.496800 2410 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:02:26.496937 kubelet[2410]: I0313 00:02:26.496915 2410 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:02:26.497436 kubelet[2410]: E0313 00:02:26.497392 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.109.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-499db54055?timeout=10s\": dial tcp 168.119.109.176:6443: connect: connection refused" interval="200ms" Mar 13 00:02:26.497542 kubelet[2410]: E0313 00:02:26.497504 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.109.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.109.176:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:02:26.498470 kubelet[2410]: I0313 00:02:26.498410 2410 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:02:26.505463 kubelet[2410]: E0313 00:02:26.505429 2410 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:02:26.517228 kubelet[2410]: I0313 00:02:26.517173 2410 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:02:26.519970 kubelet[2410]: I0313 00:02:26.519912 2410 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:02:26.519970 kubelet[2410]: I0313 00:02:26.519946 2410 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:02:26.520833 kubelet[2410]: I0313 00:02:26.520222 2410 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:02:26.520833 kubelet[2410]: I0313 00:02:26.520239 2410 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:02:26.520833 kubelet[2410]: E0313 00:02:26.520289 2410 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:02:26.525685 kubelet[2410]: E0313 00:02:26.525627 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.109.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.109.176:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:02:26.528172 kubelet[2410]: I0313 00:02:26.528142 2410 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:02:26.528172 kubelet[2410]: I0313 00:02:26.528161 2410 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:02:26.528172 kubelet[2410]: I0313 00:02:26.528178 2410 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:02:26.531149 kubelet[2410]: I0313 00:02:26.531111 2410 policy_none.go:49] "None policy: Start" Mar 13 00:02:26.531149 kubelet[2410]: I0313 00:02:26.531146 2410 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:02:26.531149 kubelet[2410]: I0313 00:02:26.531160 2410 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:02:26.538030 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:02:26.554702 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:02:26.558451 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 13 00:02:26.578044 kubelet[2410]: E0313 00:02:26.578001 2410 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:02:26.578804 kubelet[2410]: I0313 00:02:26.578726 2410 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:02:26.578804 kubelet[2410]: I0313 00:02:26.578761 2410 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:02:26.580052 kubelet[2410]: I0313 00:02:26.580005 2410 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:02:26.581843 kubelet[2410]: E0313 00:02:26.581778 2410 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:02:26.581843 kubelet[2410]: E0313 00:02:26.581840 2410 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-499db54055\" not found" Mar 13 00:02:26.635733 systemd[1]: Created slice kubepods-burstable-pode600220d207e9f6fd8d922cc6a156c98.slice - libcontainer container kubepods-burstable-pode600220d207e9f6fd8d922cc6a156c98.slice. Mar 13 00:02:26.644124 kubelet[2410]: E0313 00:02:26.643514 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:26.650052 systemd[1]: Created slice kubepods-burstable-pod308a86c98bec02064e2dcf67d5d91801.slice - libcontainer container kubepods-burstable-pod308a86c98bec02064e2dcf67d5d91801.slice. 
Mar 13 00:02:26.653043 kubelet[2410]: E0313 00:02:26.653019 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:26.655567 systemd[1]: Created slice kubepods-burstable-pod44d4cb533159abdfa2bb4f6afb0ccca0.slice - libcontainer container kubepods-burstable-pod44d4cb533159abdfa2bb4f6afb0ccca0.slice. Mar 13 00:02:26.657580 kubelet[2410]: E0313 00:02:26.657553 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:26.682854 kubelet[2410]: I0313 00:02:26.682360 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:26.682854 kubelet[2410]: E0313 00:02:26.682803 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.109.176:6443/api/v1/nodes\": dial tcp 168.119.109.176:6443: connect: connection refused" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:26.698562 kubelet[2410]: E0313 00:02:26.698523 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.109.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-499db54055?timeout=10s\": dial tcp 168.119.109.176:6443: connect: connection refused" interval="400ms" Mar 13 00:02:26.797602 kubelet[2410]: I0313 00:02:26.797501 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.797993 kubelet[2410]: I0313 00:02:26.797809 2410 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d4cb533159abdfa2bb4f6afb0ccca0-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-499db54055\" (UID: \"44d4cb533159abdfa2bb4f6afb0ccca0\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.797993 kubelet[2410]: I0313 00:02:26.797840 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e600220d207e9f6fd8d922cc6a156c98-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-499db54055\" (UID: \"e600220d207e9f6fd8d922cc6a156c98\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.797993 kubelet[2410]: I0313 00:02:26.797861 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e600220d207e9f6fd8d922cc6a156c98-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-499db54055\" (UID: \"e600220d207e9f6fd8d922cc6a156c98\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.797993 kubelet[2410]: I0313 00:02:26.797911 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.797993 kubelet[2410]: I0313 00:02:26.797937 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.798649 kubelet[2410]: I0313 00:02:26.797958 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.798764 kubelet[2410]: I0313 00:02:26.798748 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e600220d207e9f6fd8d922cc6a156c98-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-499db54055\" (UID: \"e600220d207e9f6fd8d922cc6a156c98\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.798896 kubelet[2410]: I0313 00:02:26.798880 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" Mar 13 00:02:26.885982 kubelet[2410]: I0313 00:02:26.885940 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:26.886491 kubelet[2410]: E0313 00:02:26.886449 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.109.176:6443/api/v1/nodes\": dial tcp 168.119.109.176:6443: connect: connection refused" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:26.944950 containerd[1547]: time="2026-03-13T00:02:26.944794878Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-499db54055,Uid:e600220d207e9f6fd8d922cc6a156c98,Namespace:kube-system,Attempt:0,}" Mar 13 00:02:26.954879 containerd[1547]: time="2026-03-13T00:02:26.954513864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-499db54055,Uid:308a86c98bec02064e2dcf67d5d91801,Namespace:kube-system,Attempt:0,}" Mar 13 00:02:26.959029 containerd[1547]: time="2026-03-13T00:02:26.958986639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-499db54055,Uid:44d4cb533159abdfa2bb4f6afb0ccca0,Namespace:kube-system,Attempt:0,}" Mar 13 00:02:26.982997 containerd[1547]: time="2026-03-13T00:02:26.982731387Z" level=info msg="connecting to shim 718d40efceab9c548c1111cb74827cd4f8713cefc3eb29982aed260a00bb52da" address="unix:///run/containerd/s/9aa8e2b62b79874066e2acce89b36b6eca591d246844f6f09660c47d997fd3e2" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:02:27.007902 containerd[1547]: time="2026-03-13T00:02:27.007858929Z" level=info msg="connecting to shim 43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a" address="unix:///run/containerd/s/8ba54817637e9b2d740ea831507af06073ebfa711c26b5b6103089d445ba1f60" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:02:27.008554 containerd[1547]: time="2026-03-13T00:02:27.008176087Z" level=info msg="connecting to shim f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a" address="unix:///run/containerd/s/f7b2cfd28c691331a410806107ebabf18e218d9d1a8ac299919615ef89841db9" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:02:27.031378 systemd[1]: Started cri-containerd-718d40efceab9c548c1111cb74827cd4f8713cefc3eb29982aed260a00bb52da.scope - libcontainer container 718d40efceab9c548c1111cb74827cd4f8713cefc3eb29982aed260a00bb52da. 
Mar 13 00:02:27.053331 systemd[1]: Started cri-containerd-43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a.scope - libcontainer container 43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a. Mar 13 00:02:27.056002 systemd[1]: Started cri-containerd-f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a.scope - libcontainer container f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a. Mar 13 00:02:27.101225 kubelet[2410]: E0313 00:02:27.100095 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.109.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-499db54055?timeout=10s\": dial tcp 168.119.109.176:6443: connect: connection refused" interval="800ms" Mar 13 00:02:27.117228 containerd[1547]: time="2026-03-13T00:02:27.116808120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-499db54055,Uid:e600220d207e9f6fd8d922cc6a156c98,Namespace:kube-system,Attempt:0,} returns sandbox id \"718d40efceab9c548c1111cb74827cd4f8713cefc3eb29982aed260a00bb52da\"" Mar 13 00:02:27.128236 containerd[1547]: time="2026-03-13T00:02:27.128195060Z" level=info msg="CreateContainer within sandbox \"718d40efceab9c548c1111cb74827cd4f8713cefc3eb29982aed260a00bb52da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:02:27.131447 containerd[1547]: time="2026-03-13T00:02:27.131299564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-499db54055,Uid:44d4cb533159abdfa2bb4f6afb0ccca0,Namespace:kube-system,Attempt:0,} returns sandbox id \"43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a\"" Mar 13 00:02:27.137953 containerd[1547]: time="2026-03-13T00:02:27.137904330Z" level=info msg="CreateContainer within sandbox \"43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 
00:02:27.140290 containerd[1547]: time="2026-03-13T00:02:27.140251677Z" level=info msg="Container 8aa10c0114f1dab93004e71575c238681b4341a71a923310f5382616b49c733e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:02:27.151743 containerd[1547]: time="2026-03-13T00:02:27.151666658Z" level=info msg="CreateContainer within sandbox \"718d40efceab9c548c1111cb74827cd4f8713cefc3eb29982aed260a00bb52da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8aa10c0114f1dab93004e71575c238681b4341a71a923310f5382616b49c733e\"" Mar 13 00:02:27.151868 containerd[1547]: time="2026-03-13T00:02:27.151825777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-499db54055,Uid:308a86c98bec02064e2dcf67d5d91801,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a\"" Mar 13 00:02:27.152917 containerd[1547]: time="2026-03-13T00:02:27.152887371Z" level=info msg="StartContainer for \"8aa10c0114f1dab93004e71575c238681b4341a71a923310f5382616b49c733e\"" Mar 13 00:02:27.155751 containerd[1547]: time="2026-03-13T00:02:27.155665357Z" level=info msg="connecting to shim 8aa10c0114f1dab93004e71575c238681b4341a71a923310f5382616b49c733e" address="unix:///run/containerd/s/9aa8e2b62b79874066e2acce89b36b6eca591d246844f6f09660c47d997fd3e2" protocol=ttrpc version=3 Mar 13 00:02:27.156780 containerd[1547]: time="2026-03-13T00:02:27.156554272Z" level=info msg="Container d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:02:27.158369 containerd[1547]: time="2026-03-13T00:02:27.158340983Z" level=info msg="CreateContainer within sandbox \"f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:02:27.163972 containerd[1547]: time="2026-03-13T00:02:27.163936834Z" level=info msg="CreateContainer within sandbox 
\"43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096\"" Mar 13 00:02:27.165638 containerd[1547]: time="2026-03-13T00:02:27.165611665Z" level=info msg="StartContainer for \"d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096\"" Mar 13 00:02:27.171176 containerd[1547]: time="2026-03-13T00:02:27.170887317Z" level=info msg="connecting to shim d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096" address="unix:///run/containerd/s/8ba54817637e9b2d740ea831507af06073ebfa711c26b5b6103089d445ba1f60" protocol=ttrpc version=3 Mar 13 00:02:27.176402 containerd[1547]: time="2026-03-13T00:02:27.176333289Z" level=info msg="Container 163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:02:27.179511 systemd[1]: Started cri-containerd-8aa10c0114f1dab93004e71575c238681b4341a71a923310f5382616b49c733e.scope - libcontainer container 8aa10c0114f1dab93004e71575c238681b4341a71a923310f5382616b49c733e. 
Mar 13 00:02:27.189577 containerd[1547]: time="2026-03-13T00:02:27.189457940Z" level=info msg="CreateContainer within sandbox \"f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419\"" Mar 13 00:02:27.191255 containerd[1547]: time="2026-03-13T00:02:27.191220451Z" level=info msg="StartContainer for \"163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419\"" Mar 13 00:02:27.193005 containerd[1547]: time="2026-03-13T00:02:27.192959802Z" level=info msg="connecting to shim 163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419" address="unix:///run/containerd/s/f7b2cfd28c691331a410806107ebabf18e218d9d1a8ac299919615ef89841db9" protocol=ttrpc version=3 Mar 13 00:02:27.203359 systemd[1]: Started cri-containerd-d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096.scope - libcontainer container d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096. Mar 13 00:02:27.223254 systemd[1]: Started cri-containerd-163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419.scope - libcontainer container 163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419. 
Mar 13 00:02:27.247544 containerd[1547]: time="2026-03-13T00:02:27.247502197Z" level=info msg="StartContainer for \"8aa10c0114f1dab93004e71575c238681b4341a71a923310f5382616b49c733e\" returns successfully" Mar 13 00:02:27.277580 containerd[1547]: time="2026-03-13T00:02:27.277541600Z" level=info msg="StartContainer for \"d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096\" returns successfully" Mar 13 00:02:27.289572 kubelet[2410]: I0313 00:02:27.289538 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:27.289927 kubelet[2410]: E0313 00:02:27.289879 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.109.176:6443/api/v1/nodes\": dial tcp 168.119.109.176:6443: connect: connection refused" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:27.329153 containerd[1547]: time="2026-03-13T00:02:27.328759013Z" level=info msg="StartContainer for \"163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419\" returns successfully" Mar 13 00:02:27.345446 kubelet[2410]: E0313 00:02:27.345390 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.109.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.109.176:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:02:27.533533 kubelet[2410]: E0313 00:02:27.533437 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:27.540697 kubelet[2410]: E0313 00:02:27.540515 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:27.544462 kubelet[2410]: E0313 
00:02:27.544397 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:28.093026 kubelet[2410]: I0313 00:02:28.092278 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:28.546754 kubelet[2410]: E0313 00:02:28.546627 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:28.549511 kubelet[2410]: E0313 00:02:28.549174 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:29.547846 kubelet[2410]: E0313 00:02:29.547817 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:30.065946 kubelet[2410]: E0313 00:02:30.065901 2410 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-499db54055\" not found" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:30.253831 kubelet[2410]: I0313 00:02:30.253624 2410 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-499db54055" Mar 13 00:02:30.253831 kubelet[2410]: E0313 00:02:30.253669 2410 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-499db54055\": node \"ci-4459-2-4-n-499db54055\" not found" Mar 13 00:02:30.295801 kubelet[2410]: I0313 00:02:30.295349 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" Mar 13 00:02:30.384199 kubelet[2410]: E0313 00:02:30.384072 2410 kubelet.go:3311] "Failed creating a 
mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-499db54055\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" Mar 13 00:02:30.384678 kubelet[2410]: I0313 00:02:30.384459 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-499db54055" Mar 13 00:02:30.389074 kubelet[2410]: E0313 00:02:30.388730 2410 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-499db54055\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-499db54055" Mar 13 00:02:30.389074 kubelet[2410]: I0313 00:02:30.388765 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055" Mar 13 00:02:30.397130 kubelet[2410]: E0313 00:02:30.397096 2410 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-499db54055\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055" Mar 13 00:02:30.469780 kubelet[2410]: I0313 00:02:30.469545 2410 apiserver.go:52] "Watching apiserver" Mar 13 00:02:30.496051 kubelet[2410]: I0313 00:02:30.495464 2410 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:02:32.738986 systemd[1]: Reload requested from client PID 2691 ('systemctl') (unit session-7.scope)... Mar 13 00:02:32.739004 systemd[1]: Reloading... Mar 13 00:02:32.846099 zram_generator::config[2731]: No configuration found. Mar 13 00:02:33.073245 systemd[1]: Reloading finished in 333 ms. Mar 13 00:02:33.107718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:02:33.121377 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 13 00:02:33.122042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:02:33.122273 systemd[1]: kubelet.service: Consumed 1.146s CPU time, 125.1M memory peak. Mar 13 00:02:33.125916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:02:33.302803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:02:33.319005 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:02:33.383850 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:02:33.383850 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:02:33.383850 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:02:33.384253 kubelet[2780]: I0313 00:02:33.383962 2780 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:02:33.397372 kubelet[2780]: I0313 00:02:33.397310 2780 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 13 00:02:33.397372 kubelet[2780]: I0313 00:02:33.397340 2780 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:02:33.397883 kubelet[2780]: I0313 00:02:33.397854 2780 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:02:33.399609 kubelet[2780]: I0313 00:02:33.399545 2780 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 13 00:02:33.404586 kubelet[2780]: I0313 00:02:33.404547 2780 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:02:33.411980 kubelet[2780]: I0313 00:02:33.410189 2780 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:02:33.413179 kubelet[2780]: I0313 00:02:33.413137 2780 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 13 00:02:33.413529 kubelet[2780]: I0313 00:02:33.413497 2780 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:02:33.413838 kubelet[2780]: I0313 00:02:33.413592 2780 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-499db54055","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:02:33.414014 kubelet[2780]: I0313 00:02:33.413998 2780 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:02:33.414185 kubelet[2780]: I0313 00:02:33.414166 2780 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 00:02:33.414293 kubelet[2780]: I0313 00:02:33.414284 2780 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:02:33.414511 kubelet[2780]: I0313 00:02:33.414498 2780 kubelet.go:480] "Attempting to sync node with API server"
Mar 13 00:02:33.414580 kubelet[2780]: I0313 00:02:33.414570 2780 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:02:33.414662 kubelet[2780]: I0313 00:02:33.414653 2780 kubelet.go:386] "Adding apiserver pod source"
Mar 13 00:02:33.414726 kubelet[2780]: I0313 00:02:33.414717 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:02:33.421480 kubelet[2780]: I0313 00:02:33.421444 2780 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:02:33.422306 kubelet[2780]: I0313 00:02:33.422164 2780 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:02:33.425234 kubelet[2780]: I0313 00:02:33.425208 2780 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 13 00:02:33.425330 kubelet[2780]: I0313 00:02:33.425273 2780 server.go:1289] "Started kubelet"
Mar 13 00:02:33.425684 kubelet[2780]: I0313 00:02:33.425650 2780 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:02:33.425885 kubelet[2780]: I0313 00:02:33.425829 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:02:33.426196 kubelet[2780]: I0313 00:02:33.426173 2780 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:02:33.428107 kubelet[2780]: I0313 00:02:33.426844 2780 server.go:317] "Adding debug handlers to kubelet server"
Mar 13 00:02:33.431080 kubelet[2780]: I0313 00:02:33.430038 2780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:02:33.443524 kubelet[2780]: I0313 00:02:33.443441 2780 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:02:33.445209 kubelet[2780]: I0313 00:02:33.445130 2780 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 13 00:02:33.445443 kubelet[2780]: E0313 00:02:33.445415 2780 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-499db54055\" not found"
Mar 13 00:02:33.446871 kubelet[2780]: I0313 00:02:33.446373 2780 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 13 00:02:33.446871 kubelet[2780]: I0313 00:02:33.446490 2780 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 00:02:33.460735 kubelet[2780]: I0313 00:02:33.460693 2780 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:02:33.460873 kubelet[2780]: I0313 00:02:33.460836 2780 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:02:33.472281 kubelet[2780]: I0313 00:02:33.472095 2780 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:02:33.477439 kubelet[2780]: I0313 00:02:33.477320 2780 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:02:33.489080 kubelet[2780]: I0313 00:02:33.488871 2780 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:02:33.489080 kubelet[2780]: I0313 00:02:33.489019 2780 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 13 00:02:33.489080 kubelet[2780]: I0313 00:02:33.489038 2780 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:02:33.489080 kubelet[2780]: I0313 00:02:33.489045 2780 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 13 00:02:33.489348 kubelet[2780]: E0313 00:02:33.489123 2780 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:02:33.535409 kubelet[2780]: I0313 00:02:33.535380 2780 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535604 2780 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535627 2780 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535776 2780 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535788 2780 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535813 2780 policy_none.go:49] "None policy: Start"
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535825 2780 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535835 2780 state_mem.go:35] "Initializing new in-memory state store"
Mar 13 00:02:33.536209 kubelet[2780]: I0313 00:02:33.535967 2780 state_mem.go:75] "Updated machine memory state"
Mar 13 00:02:33.542220 kubelet[2780]: E0313 00:02:33.542198 2780 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:02:33.542624 kubelet[2780]: I0313 00:02:33.542609 2780 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 00:02:33.543194 kubelet[2780]: I0313 00:02:33.543153 2780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:02:33.543579 kubelet[2780]: I0313 00:02:33.543558 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 00:02:33.547401 kubelet[2780]: E0313 00:02:33.547364 2780 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:02:33.593314 kubelet[2780]: I0313 00:02:33.591092 2780 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.593314 kubelet[2780]: I0313 00:02:33.591146 2780 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.593314 kubelet[2780]: I0313 00:02:33.591572 2780 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650214 kubelet[2780]: I0313 00:02:33.647286 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650214 kubelet[2780]: I0313 00:02:33.650055 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650460 kubelet[2780]: I0313 00:02:33.650252 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650460 kubelet[2780]: I0313 00:02:33.650330 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650460 kubelet[2780]: I0313 00:02:33.650395 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d4cb533159abdfa2bb4f6afb0ccca0-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-499db54055\" (UID: \"44d4cb533159abdfa2bb4f6afb0ccca0\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650460 kubelet[2780]: I0313 00:02:33.650437 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e600220d207e9f6fd8d922cc6a156c98-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-499db54055\" (UID: \"e600220d207e9f6fd8d922cc6a156c98\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650670 kubelet[2780]: I0313 00:02:33.650496 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e600220d207e9f6fd8d922cc6a156c98-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-499db54055\" (UID: \"e600220d207e9f6fd8d922cc6a156c98\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650670 kubelet[2780]: I0313 00:02:33.650550 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/308a86c98bec02064e2dcf67d5d91801-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-499db54055\" (UID: \"308a86c98bec02064e2dcf67d5d91801\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.650670 kubelet[2780]: I0313 00:02:33.650661 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e600220d207e9f6fd8d922cc6a156c98-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-499db54055\" (UID: \"e600220d207e9f6fd8d922cc6a156c98\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.653031 kubelet[2780]: I0313 00:02:33.652609 2780 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.662917 kubelet[2780]: I0313 00:02:33.662762 2780 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-499db54055"
Mar 13 00:02:33.662917 kubelet[2780]: I0313 00:02:33.662863 2780 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-499db54055"
Mar 13 00:02:34.424443 kubelet[2780]: I0313 00:02:34.424246 2780 apiserver.go:52] "Watching apiserver"
Mar 13 00:02:34.446610 kubelet[2780]: I0313 00:02:34.446529 2780 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 13 00:02:34.514794 kubelet[2780]: I0313 00:02:34.513527 2780 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055"
Mar 13 00:02:34.534095 kubelet[2780]: E0313 00:02:34.532425 2780 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-499db54055\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055"
Mar 13 00:02:34.570420 kubelet[2780]: I0313 00:02:34.570349 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-499db54055" podStartSLOduration=1.570334495 podStartE2EDuration="1.570334495s" podCreationTimestamp="2026-03-13 00:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:02:34.548495928 +0000 UTC m=+1.224054342" watchObservedRunningTime="2026-03-13 00:02:34.570334495 +0000 UTC m=+1.245892909"
Mar 13 00:02:34.571129 kubelet[2780]: I0313 00:02:34.570761 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-499db54055" podStartSLOduration=1.570748374 podStartE2EDuration="1.570748374s" podCreationTimestamp="2026-03-13 00:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:02:34.569485298 +0000 UTC m=+1.245043712" watchObservedRunningTime="2026-03-13 00:02:34.570748374 +0000 UTC m=+1.246306748"
Mar 13 00:02:39.523652 kubelet[2780]: I0313 00:02:39.523613 2780 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 13 00:02:39.524674 containerd[1547]: time="2026-03-13T00:02:39.524619737Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 13 00:02:39.524945 kubelet[2780]: I0313 00:02:39.524895 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 13 00:02:40.694705 kubelet[2780]: I0313 00:02:40.694360 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-499db54055" podStartSLOduration=7.694337825 podStartE2EDuration="7.694337825s" podCreationTimestamp="2026-03-13 00:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:02:34.588119836 +0000 UTC m=+1.263678290" watchObservedRunningTime="2026-03-13 00:02:40.694337825 +0000 UTC m=+7.369896239"
Mar 13 00:02:40.706938 systemd[1]: Created slice kubepods-besteffort-pod52fda370_d3cb_4e3c_8559_221aa0627387.slice - libcontainer container kubepods-besteffort-pod52fda370_d3cb_4e3c_8559_221aa0627387.slice.
Mar 13 00:02:40.729115 systemd[1]: Created slice kubepods-besteffort-pod45e6eaf0_4e9d_44c9_bfc6_122f18a0007e.slice - libcontainer container kubepods-besteffort-pod45e6eaf0_4e9d_44c9_bfc6_122f18a0007e.slice.
Mar 13 00:02:40.797876 kubelet[2780]: I0313 00:02:40.797808 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45e6eaf0-4e9d-44c9-bfc6-122f18a0007e-kube-proxy\") pod \"kube-proxy-tz8lf\" (UID: \"45e6eaf0-4e9d-44c9-bfc6-122f18a0007e\") " pod="kube-system/kube-proxy-tz8lf"
Mar 13 00:02:40.798038 kubelet[2780]: I0313 00:02:40.797899 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ph5\" (UniqueName: \"kubernetes.io/projected/45e6eaf0-4e9d-44c9-bfc6-122f18a0007e-kube-api-access-x6ph5\") pod \"kube-proxy-tz8lf\" (UID: \"45e6eaf0-4e9d-44c9-bfc6-122f18a0007e\") " pod="kube-system/kube-proxy-tz8lf"
Mar 13 00:02:40.798038 kubelet[2780]: I0313 00:02:40.797958 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45e6eaf0-4e9d-44c9-bfc6-122f18a0007e-lib-modules\") pod \"kube-proxy-tz8lf\" (UID: \"45e6eaf0-4e9d-44c9-bfc6-122f18a0007e\") " pod="kube-system/kube-proxy-tz8lf"
Mar 13 00:02:40.798038 kubelet[2780]: I0313 00:02:40.797992 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzfpg\" (UniqueName: \"kubernetes.io/projected/52fda370-d3cb-4e3c-8559-221aa0627387-kube-api-access-lzfpg\") pod \"tigera-operator-6bf85f8dd-4m22r\" (UID: \"52fda370-d3cb-4e3c-8559-221aa0627387\") " pod="tigera-operator/tigera-operator-6bf85f8dd-4m22r"
Mar 13 00:02:40.798579 kubelet[2780]: I0313 00:02:40.798480 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45e6eaf0-4e9d-44c9-bfc6-122f18a0007e-xtables-lock\") pod \"kube-proxy-tz8lf\" (UID: \"45e6eaf0-4e9d-44c9-bfc6-122f18a0007e\") " pod="kube-system/kube-proxy-tz8lf"
Mar 13 00:02:40.798579 kubelet[2780]: I0313 00:02:40.798532 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52fda370-d3cb-4e3c-8559-221aa0627387-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-4m22r\" (UID: \"52fda370-d3cb-4e3c-8559-221aa0627387\") " pod="tigera-operator/tigera-operator-6bf85f8dd-4m22r"
Mar 13 00:02:41.017411 containerd[1547]: time="2026-03-13T00:02:41.017210139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-4m22r,Uid:52fda370-d3cb-4e3c-8559-221aa0627387,Namespace:tigera-operator,Attempt:0,}"
Mar 13 00:02:41.037081 containerd[1547]: time="2026-03-13T00:02:41.036946977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tz8lf,Uid:45e6eaf0-4e9d-44c9-bfc6-122f18a0007e,Namespace:kube-system,Attempt:0,}"
Mar 13 00:02:41.046588 containerd[1547]: time="2026-03-13T00:02:41.046538997Z" level=info msg="connecting to shim 07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823" address="unix:///run/containerd/s/8ba09b9bbcaed8992689642b14ef0383e2eb3e7786b372474404057111cff680" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:02:41.082659 containerd[1547]: time="2026-03-13T00:02:41.082608400Z" level=info msg="connecting to shim f82eef1dc2b8cf526c655ed19a1dc15d8719e6595f9fe86960beca6d5c032bf4" address="unix:///run/containerd/s/d82810ce71a703af6a236f7adda4feb0e9728290a6ba52895bf5412a3a8445f4" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:02:41.086494 systemd[1]: Started cri-containerd-07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823.scope - libcontainer container 07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823.
Mar 13 00:02:41.118233 systemd[1]: Started cri-containerd-f82eef1dc2b8cf526c655ed19a1dc15d8719e6595f9fe86960beca6d5c032bf4.scope - libcontainer container f82eef1dc2b8cf526c655ed19a1dc15d8719e6595f9fe86960beca6d5c032bf4.
Mar 13 00:02:41.146904 containerd[1547]: time="2026-03-13T00:02:41.146845224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-4m22r,Uid:52fda370-d3cb-4e3c-8559-221aa0627387,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823\""
Mar 13 00:02:41.150297 containerd[1547]: time="2026-03-13T00:02:41.150253417Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 13 00:02:41.163856 containerd[1547]: time="2026-03-13T00:02:41.163818028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tz8lf,Uid:45e6eaf0-4e9d-44c9-bfc6-122f18a0007e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f82eef1dc2b8cf526c655ed19a1dc15d8719e6595f9fe86960beca6d5c032bf4\""
Mar 13 00:02:41.169248 containerd[1547]: time="2026-03-13T00:02:41.169113337Z" level=info msg="CreateContainer within sandbox \"f82eef1dc2b8cf526c655ed19a1dc15d8719e6595f9fe86960beca6d5c032bf4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 13 00:02:41.181645 containerd[1547]: time="2026-03-13T00:02:41.180429073Z" level=info msg="Container b3dc66ca7e8fb55da079dce352254941b30c6558ed32acfb69bf5fa7a6700bd5: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:02:41.191325 containerd[1547]: time="2026-03-13T00:02:41.191281970Z" level=info msg="CreateContainer within sandbox \"f82eef1dc2b8cf526c655ed19a1dc15d8719e6595f9fe86960beca6d5c032bf4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b3dc66ca7e8fb55da079dce352254941b30c6558ed32acfb69bf5fa7a6700bd5\""
Mar 13 00:02:41.192532 containerd[1547]: time="2026-03-13T00:02:41.192483648Z" level=info msg="StartContainer for \"b3dc66ca7e8fb55da079dce352254941b30c6558ed32acfb69bf5fa7a6700bd5\""
Mar 13 00:02:41.195544 containerd[1547]: time="2026-03-13T00:02:41.195482201Z" level=info msg="connecting to shim b3dc66ca7e8fb55da079dce352254941b30c6558ed32acfb69bf5fa7a6700bd5" address="unix:///run/containerd/s/d82810ce71a703af6a236f7adda4feb0e9728290a6ba52895bf5412a3a8445f4" protocol=ttrpc version=3
Mar 13 00:02:41.217276 systemd[1]: Started cri-containerd-b3dc66ca7e8fb55da079dce352254941b30c6558ed32acfb69bf5fa7a6700bd5.scope - libcontainer container b3dc66ca7e8fb55da079dce352254941b30c6558ed32acfb69bf5fa7a6700bd5.
Mar 13 00:02:41.292293 containerd[1547]: time="2026-03-13T00:02:41.291958157Z" level=info msg="StartContainer for \"b3dc66ca7e8fb55da079dce352254941b30c6558ed32acfb69bf5fa7a6700bd5\" returns successfully"
Mar 13 00:02:41.560493 kubelet[2780]: I0313 00:02:41.560337 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tz8lf" podStartSLOduration=1.560310469 podStartE2EDuration="1.560310469s" podCreationTimestamp="2026-03-13 00:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:02:41.547876416 +0000 UTC m=+8.223434830" watchObservedRunningTime="2026-03-13 00:02:41.560310469 +0000 UTC m=+8.235868883"
Mar 13 00:02:42.941052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81493698.mount: Deactivated successfully.
Mar 13 00:02:45.219482 containerd[1547]: time="2026-03-13T00:02:45.219347673Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:45.221042 containerd[1547]: time="2026-03-13T00:02:45.220993070Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Mar 13 00:02:45.221791 containerd[1547]: time="2026-03-13T00:02:45.221676189Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:45.226101 containerd[1547]: time="2026-03-13T00:02:45.226015342Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:02:45.227339 containerd[1547]: time="2026-03-13T00:02:45.226898901Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 4.076595964s"
Mar 13 00:02:45.227339 containerd[1547]: time="2026-03-13T00:02:45.226940221Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Mar 13 00:02:45.232003 containerd[1547]: time="2026-03-13T00:02:45.231510333Z" level=info msg="CreateContainer within sandbox \"07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 13 00:02:45.244174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222598182.mount: Deactivated successfully.
Mar 13 00:02:45.246330 containerd[1547]: time="2026-03-13T00:02:45.245536430Z" level=info msg="Container 58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:02:45.253042 containerd[1547]: time="2026-03-13T00:02:45.252996018Z" level=info msg="CreateContainer within sandbox \"07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a\""
Mar 13 00:02:45.254414 containerd[1547]: time="2026-03-13T00:02:45.254371176Z" level=info msg="StartContainer for \"58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a\""
Mar 13 00:02:45.255981 containerd[1547]: time="2026-03-13T00:02:45.255949253Z" level=info msg="connecting to shim 58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a" address="unix:///run/containerd/s/8ba09b9bbcaed8992689642b14ef0383e2eb3e7786b372474404057111cff680" protocol=ttrpc version=3
Mar 13 00:02:45.283834 systemd[1]: Started cri-containerd-58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a.scope - libcontainer container 58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a.
Mar 13 00:02:45.319657 containerd[1547]: time="2026-03-13T00:02:45.319556709Z" level=info msg="StartContainer for \"58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a\" returns successfully"
Mar 13 00:02:51.600138 sudo[1816]: pam_unix(sudo:session): session closed for user root
Mar 13 00:02:51.694763 sshd[1815]: Connection closed by 20.161.92.111 port 49486
Mar 13 00:02:51.694660 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Mar 13 00:02:51.701952 systemd[1]: sshd@6-168.119.109.176:22-20.161.92.111:49486.service: Deactivated successfully.
Mar 13 00:02:51.709431 systemd[1]: session-7.scope: Deactivated successfully.
Mar 13 00:02:51.711224 systemd[1]: session-7.scope: Consumed 7.874s CPU time, 222.9M memory peak.
Mar 13 00:02:51.717699 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit.
Mar 13 00:02:51.720336 systemd-logind[1528]: Removed session 7.
Mar 13 00:02:54.320514 systemd[1]: Started sshd@7-168.119.109.176:22-203.121.106.56:29116.service - OpenSSH per-connection server daemon (203.121.106.56:29116).
Mar 13 00:02:54.725109 sshd[3174]: Connection closed by 203.121.106.56 port 29116 [preauth]
Mar 13 00:02:54.726780 systemd[1]: sshd@7-168.119.109.176:22-203.121.106.56:29116.service: Deactivated successfully.
Mar 13 00:02:58.789508 kubelet[2780]: I0313 00:02:58.789113 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-4m22r" podStartSLOduration=14.709813544 podStartE2EDuration="18.789090222s" podCreationTimestamp="2026-03-13 00:02:40 +0000 UTC" firstStartedPulling="2026-03-13 00:02:41.148487541 +0000 UTC m=+7.824045955" lastFinishedPulling="2026-03-13 00:02:45.227764219 +0000 UTC m=+11.903322633" observedRunningTime="2026-03-13 00:02:45.559004518 +0000 UTC m=+12.234562932" watchObservedRunningTime="2026-03-13 00:02:58.789090222 +0000 UTC m=+25.464648676"
Mar 13 00:02:58.804134 systemd[1]: Created slice kubepods-besteffort-poddcb5d512_246e_4c77_84d3_b33859523c95.slice - libcontainer container kubepods-besteffort-poddcb5d512_246e_4c77_84d3_b33859523c95.slice.
Mar 13 00:02:58.820926 kubelet[2780]: I0313 00:02:58.820877 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcb5d512-246e-4c77-84d3-b33859523c95-tigera-ca-bundle\") pod \"calico-typha-55bf4bd796-6j5lv\" (UID: \"dcb5d512-246e-4c77-84d3-b33859523c95\") " pod="calico-system/calico-typha-55bf4bd796-6j5lv"
Mar 13 00:02:58.820926 kubelet[2780]: I0313 00:02:58.820926 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dcb5d512-246e-4c77-84d3-b33859523c95-typha-certs\") pod \"calico-typha-55bf4bd796-6j5lv\" (UID: \"dcb5d512-246e-4c77-84d3-b33859523c95\") " pod="calico-system/calico-typha-55bf4bd796-6j5lv"
Mar 13 00:02:58.820926 kubelet[2780]: I0313 00:02:58.820945 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2bnr\" (UniqueName: \"kubernetes.io/projected/dcb5d512-246e-4c77-84d3-b33859523c95-kube-api-access-q2bnr\") pod \"calico-typha-55bf4bd796-6j5lv\" (UID: \"dcb5d512-246e-4c77-84d3-b33859523c95\") " pod="calico-system/calico-typha-55bf4bd796-6j5lv"
Mar 13 00:02:58.915418 systemd[1]: Created slice kubepods-besteffort-pod86216013_c732_43e3_8cff_b66a7b79c742.slice - libcontainer container kubepods-besteffort-pod86216013_c732_43e3_8cff_b66a7b79c742.slice.
Mar 13 00:02:58.921958 kubelet[2780]: I0313 00:02:58.921895 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-sys-fs\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.921958 kubelet[2780]: I0313 00:02:58.921957 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdh7\" (UniqueName: \"kubernetes.io/projected/86216013-c732-43e3-8cff-b66a7b79c742-kube-api-access-7kdh7\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922447 kubelet[2780]: I0313 00:02:58.922024 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-var-lib-calico\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922667 kubelet[2780]: I0313 00:02:58.922453 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-flexvol-driver-host\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922667 kubelet[2780]: I0313 00:02:58.922477 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-cni-bin-dir\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922667 kubelet[2780]: I0313 00:02:58.922524 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-cni-log-dir\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922667 kubelet[2780]: I0313 00:02:58.922538 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-policysync\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922667 kubelet[2780]: I0313 00:02:58.922552 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-xtables-lock\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922960 kubelet[2780]: I0313 00:02:58.922576 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-lib-modules\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922960 kubelet[2780]: I0313 00:02:58.922589 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86216013-c732-43e3-8cff-b66a7b79c742-tigera-ca-bundle\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.922960 kubelet[2780]: I0313 00:02:58.922603 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86216013-c732-43e3-8cff-b66a7b79c742-node-certs\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.923479 kubelet[2780]: I0313 00:02:58.923181 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-var-run-calico\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.923479 kubelet[2780]: I0313 00:02:58.923213 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-cni-net-dir\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.923479 kubelet[2780]: I0313 00:02:58.923230 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-bpffs\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:58.923479 kubelet[2780]: I0313 00:02:58.923245 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/86216013-c732-43e3-8cff-b66a7b79c742-nodeproc\") pod \"calico-node-xj95r\" (UID: \"86216013-c732-43e3-8cff-b66a7b79c742\") " pod="calico-system/calico-node-xj95r"
Mar 13 00:02:59.016267 kubelet[2780]: E0313 00:02:59.016132 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:02:59.044174 kubelet[2780]: E0313 00:02:59.043954 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.044174 kubelet[2780]: W0313 00:02:59.043979 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.044174 kubelet[2780]: E0313 00:02:59.044013 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.061057 kubelet[2780]: E0313 00:02:59.061021 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.061281 kubelet[2780]: W0313 00:02:59.061199 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.061281 kubelet[2780]: E0313 00:02:59.061231 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.110017 kubelet[2780]: E0313 00:02:59.109973 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.110017 kubelet[2780]: W0313 00:02:59.110009 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.110277 kubelet[2780]: E0313 00:02:59.110041 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.110380 containerd[1547]: time="2026-03-13T00:02:59.110255280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf4bd796-6j5lv,Uid:dcb5d512-246e-4c77-84d3-b33859523c95,Namespace:calico-system,Attempt:0,}" Mar 13 00:02:59.111252 kubelet[2780]: E0313 00:02:59.111211 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.111652 kubelet[2780]: W0313 00:02:59.111560 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.111652 kubelet[2780]: E0313 00:02:59.111641 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.112178 kubelet[2780]: E0313 00:02:59.112140 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.112178 kubelet[2780]: W0313 00:02:59.112160 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.112178 kubelet[2780]: E0313 00:02:59.112177 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.112610 kubelet[2780]: E0313 00:02:59.112442 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.112610 kubelet[2780]: W0313 00:02:59.112549 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.112610 kubelet[2780]: E0313 00:02:59.112583 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.112948 kubelet[2780]: E0313 00:02:59.112908 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.112948 kubelet[2780]: W0313 00:02:59.112940 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.112948 kubelet[2780]: E0313 00:02:59.112952 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.113418 kubelet[2780]: E0313 00:02:59.113120 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.113418 kubelet[2780]: W0313 00:02:59.113129 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.113418 kubelet[2780]: E0313 00:02:59.113137 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.113707 kubelet[2780]: E0313 00:02:59.113679 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.113707 kubelet[2780]: W0313 00:02:59.113696 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.113851 kubelet[2780]: E0313 00:02:59.113708 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.114455 kubelet[2780]: E0313 00:02:59.114184 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.114455 kubelet[2780]: W0313 00:02:59.114381 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.114455 kubelet[2780]: E0313 00:02:59.114398 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.114865 kubelet[2780]: E0313 00:02:59.114834 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.114865 kubelet[2780]: W0313 00:02:59.114864 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.114940 kubelet[2780]: E0313 00:02:59.114876 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.115146 kubelet[2780]: E0313 00:02:59.115133 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.115146 kubelet[2780]: W0313 00:02:59.115146 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.115242 kubelet[2780]: E0313 00:02:59.115157 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.115462 kubelet[2780]: E0313 00:02:59.115447 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.115498 kubelet[2780]: W0313 00:02:59.115462 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.115498 kubelet[2780]: E0313 00:02:59.115473 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.115799 kubelet[2780]: E0313 00:02:59.115784 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.115799 kubelet[2780]: W0313 00:02:59.115798 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.116131 kubelet[2780]: E0313 00:02:59.116114 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.116404 kubelet[2780]: E0313 00:02:59.116389 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.116404 kubelet[2780]: W0313 00:02:59.116404 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.116488 kubelet[2780]: E0313 00:02:59.116415 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.116682 kubelet[2780]: E0313 00:02:59.116668 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.116714 kubelet[2780]: W0313 00:02:59.116682 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.116714 kubelet[2780]: E0313 00:02:59.116693 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.116901 kubelet[2780]: E0313 00:02:59.116888 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.116935 kubelet[2780]: W0313 00:02:59.116902 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.116935 kubelet[2780]: E0313 00:02:59.116912 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.117185 kubelet[2780]: E0313 00:02:59.117173 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.117185 kubelet[2780]: W0313 00:02:59.117184 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.117469 kubelet[2780]: E0313 00:02:59.117194 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.117501 kubelet[2780]: E0313 00:02:59.117469 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.117501 kubelet[2780]: W0313 00:02:59.117481 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.117501 kubelet[2780]: E0313 00:02:59.117491 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.117859 kubelet[2780]: E0313 00:02:59.117844 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.117894 kubelet[2780]: W0313 00:02:59.117859 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.117894 kubelet[2780]: E0313 00:02:59.117870 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.118253 kubelet[2780]: E0313 00:02:59.118236 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.118333 kubelet[2780]: W0313 00:02:59.118253 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.118333 kubelet[2780]: E0313 00:02:59.118263 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.118530 kubelet[2780]: E0313 00:02:59.118513 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.118530 kubelet[2780]: W0313 00:02:59.118524 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.118636 kubelet[2780]: E0313 00:02:59.118534 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.128219 kubelet[2780]: E0313 00:02:59.128178 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.128588 kubelet[2780]: W0313 00:02:59.128380 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.128588 kubelet[2780]: E0313 00:02:59.128416 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.128588 kubelet[2780]: I0313 00:02:59.128542 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00be1f59-88c4-4438-b27d-917295571b53-socket-dir\") pod \"csi-node-driver-v6ht6\" (UID: \"00be1f59-88c4-4438-b27d-917295571b53\") " pod="calico-system/csi-node-driver-v6ht6" Mar 13 00:02:59.130295 kubelet[2780]: E0313 00:02:59.130180 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.130295 kubelet[2780]: W0313 00:02:59.130203 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.130295 kubelet[2780]: E0313 00:02:59.130223 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.131684 kubelet[2780]: E0313 00:02:59.131485 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.131992 kubelet[2780]: W0313 00:02:59.131788 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.131992 kubelet[2780]: E0313 00:02:59.131816 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.133382 kubelet[2780]: E0313 00:02:59.132572 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.133382 kubelet[2780]: W0313 00:02:59.133109 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.133382 kubelet[2780]: E0313 00:02:59.133129 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.133382 kubelet[2780]: I0313 00:02:59.133170 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00be1f59-88c4-4438-b27d-917295571b53-registration-dir\") pod \"csi-node-driver-v6ht6\" (UID: \"00be1f59-88c4-4438-b27d-917295571b53\") " pod="calico-system/csi-node-driver-v6ht6" Mar 13 00:02:59.133620 kubelet[2780]: E0313 00:02:59.133604 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.133706 kubelet[2780]: W0313 00:02:59.133693 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.133765 kubelet[2780]: E0313 00:02:59.133754 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.133905 kubelet[2780]: I0313 00:02:59.133891 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00be1f59-88c4-4438-b27d-917295571b53-kubelet-dir\") pod \"csi-node-driver-v6ht6\" (UID: \"00be1f59-88c4-4438-b27d-917295571b53\") " pod="calico-system/csi-node-driver-v6ht6" Mar 13 00:02:59.134744 kubelet[2780]: E0313 00:02:59.134727 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.134963 kubelet[2780]: W0313 00:02:59.134888 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.134963 kubelet[2780]: E0313 00:02:59.134911 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.135224 kubelet[2780]: E0313 00:02:59.135211 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.135383 kubelet[2780]: W0313 00:02:59.135287 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.135383 kubelet[2780]: E0313 00:02:59.135317 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.135576 kubelet[2780]: E0313 00:02:59.135564 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.135678 kubelet[2780]: W0313 00:02:59.135629 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.135858 kubelet[2780]: E0313 00:02:59.135731 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.135858 kubelet[2780]: I0313 00:02:59.135755 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xb7\" (UniqueName: \"kubernetes.io/projected/00be1f59-88c4-4438-b27d-917295571b53-kube-api-access-g9xb7\") pod \"csi-node-driver-v6ht6\" (UID: \"00be1f59-88c4-4438-b27d-917295571b53\") " pod="calico-system/csi-node-driver-v6ht6" Mar 13 00:02:59.136030 kubelet[2780]: E0313 00:02:59.136018 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.136117 kubelet[2780]: W0313 00:02:59.136104 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.136187 kubelet[2780]: E0313 00:02:59.136168 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.136283 kubelet[2780]: I0313 00:02:59.136262 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/00be1f59-88c4-4438-b27d-917295571b53-varrun\") pod \"csi-node-driver-v6ht6\" (UID: \"00be1f59-88c4-4438-b27d-917295571b53\") " pod="calico-system/csi-node-driver-v6ht6" Mar 13 00:02:59.136631 kubelet[2780]: E0313 00:02:59.136566 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.136631 kubelet[2780]: W0313 00:02:59.136579 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.136631 kubelet[2780]: E0313 00:02:59.136589 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.137168 kubelet[2780]: E0313 00:02:59.137152 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.137403 kubelet[2780]: W0313 00:02:59.137233 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.137464 containerd[1547]: time="2026-03-13T00:02:59.137276142Z" level=info msg="connecting to shim dd0db016f5c398e57de2e975af3defea8c5a73c1a915ad937653823bd97f3b06" address="unix:///run/containerd/s/2b5902574b2e539d8f468b7fff00480793761bf1db366276b4b47ca1824e3618" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:02:59.137597 kubelet[2780]: E0313 00:02:59.137517 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.137852 kubelet[2780]: E0313 00:02:59.137838 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.137928 kubelet[2780]: W0313 00:02:59.137916 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.138017 kubelet[2780]: E0313 00:02:59.138005 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.138738 kubelet[2780]: E0313 00:02:59.138723 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.138904 kubelet[2780]: W0313 00:02:59.138821 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.138904 kubelet[2780]: E0313 00:02:59.138838 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.139329 kubelet[2780]: E0313 00:02:59.139281 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.139530 kubelet[2780]: W0313 00:02:59.139405 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.139530 kubelet[2780]: E0313 00:02:59.139423 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.139734 kubelet[2780]: E0313 00:02:59.139721 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.140134 kubelet[2780]: W0313 00:02:59.139789 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.140134 kubelet[2780]: E0313 00:02:59.139805 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.165511 systemd[1]: Started cri-containerd-dd0db016f5c398e57de2e975af3defea8c5a73c1a915ad937653823bd97f3b06.scope - libcontainer container dd0db016f5c398e57de2e975af3defea8c5a73c1a915ad937653823bd97f3b06. Mar 13 00:02:59.209009 containerd[1547]: time="2026-03-13T00:02:59.208865375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf4bd796-6j5lv,Uid:dcb5d512-246e-4c77-84d3-b33859523c95,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd0db016f5c398e57de2e975af3defea8c5a73c1a915ad937653823bd97f3b06\"" Mar 13 00:02:59.210897 containerd[1547]: time="2026-03-13T00:02:59.210854374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 13 00:02:59.219992 containerd[1547]: time="2026-03-13T00:02:59.219945408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xj95r,Uid:86216013-c732-43e3-8cff-b66a7b79c742,Namespace:calico-system,Attempt:0,}" Mar 13 00:02:59.238496 kubelet[2780]: E0313 00:02:59.238455 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.238496 kubelet[2780]: W0313 00:02:59.238491 2780 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.238825 kubelet[2780]: E0313 00:02:59.238512 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.239014 kubelet[2780]: E0313 00:02:59.238849 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.239014 kubelet[2780]: W0313 00:02:59.238867 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.239014 kubelet[2780]: E0313 00:02:59.238878 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.239582 kubelet[2780]: E0313 00:02:59.239121 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.239582 kubelet[2780]: W0313 00:02:59.239132 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.239582 kubelet[2780]: E0313 00:02:59.239148 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.239582 kubelet[2780]: E0313 00:02:59.239389 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.239582 kubelet[2780]: W0313 00:02:59.239408 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.239582 kubelet[2780]: E0313 00:02:59.239423 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.239987 kubelet[2780]: E0313 00:02:59.239770 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.239987 kubelet[2780]: W0313 00:02:59.239783 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.239987 kubelet[2780]: E0313 00:02:59.239794 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.240498 kubelet[2780]: E0313 00:02:59.240196 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.240498 kubelet[2780]: W0313 00:02:59.240208 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.240498 kubelet[2780]: E0313 00:02:59.240221 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.241271 kubelet[2780]: E0313 00:02:59.241253 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.241361 kubelet[2780]: W0313 00:02:59.241348 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.241428 kubelet[2780]: E0313 00:02:59.241416 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.242425 kubelet[2780]: E0313 00:02:59.242182 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.242425 kubelet[2780]: W0313 00:02:59.242200 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.242425 kubelet[2780]: E0313 00:02:59.242214 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.242801 kubelet[2780]: E0313 00:02:59.242629 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.242801 kubelet[2780]: W0313 00:02:59.242674 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.242801 kubelet[2780]: E0313 00:02:59.242687 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.243052 kubelet[2780]: E0313 00:02:59.242949 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.243052 kubelet[2780]: W0313 00:02:59.242962 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.243052 kubelet[2780]: E0313 00:02:59.242973 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.243469 kubelet[2780]: E0313 00:02:59.243440 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.243629 kubelet[2780]: W0313 00:02:59.243527 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.243629 kubelet[2780]: E0313 00:02:59.243563 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.243942 kubelet[2780]: E0313 00:02:59.243926 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.244289 kubelet[2780]: W0313 00:02:59.244127 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.244289 kubelet[2780]: E0313 00:02:59.244150 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.245332 kubelet[2780]: E0313 00:02:59.245121 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.245332 kubelet[2780]: W0313 00:02:59.245141 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.245332 kubelet[2780]: E0313 00:02:59.245157 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.245733 kubelet[2780]: E0313 00:02:59.245538 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.245733 kubelet[2780]: W0313 00:02:59.245552 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.245733 kubelet[2780]: E0313 00:02:59.245567 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.246056 kubelet[2780]: E0313 00:02:59.245876 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.246056 kubelet[2780]: W0313 00:02:59.245889 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.246056 kubelet[2780]: E0313 00:02:59.245901 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.246398 kubelet[2780]: E0313 00:02:59.246221 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.246398 kubelet[2780]: W0313 00:02:59.246263 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.246398 kubelet[2780]: E0313 00:02:59.246275 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.246935 kubelet[2780]: E0313 00:02:59.246785 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.246935 kubelet[2780]: W0313 00:02:59.246799 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.246935 kubelet[2780]: E0313 00:02:59.246811 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.247547 kubelet[2780]: E0313 00:02:59.247389 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.247547 kubelet[2780]: W0313 00:02:59.247405 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.247547 kubelet[2780]: E0313 00:02:59.247418 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.248210 kubelet[2780]: E0313 00:02:59.247973 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.248210 kubelet[2780]: W0313 00:02:59.248013 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.248210 kubelet[2780]: E0313 00:02:59.248026 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.248484 kubelet[2780]: E0313 00:02:59.248438 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.248484 kubelet[2780]: W0313 00:02:59.248455 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.248683 kubelet[2780]: E0313 00:02:59.248566 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.249453 kubelet[2780]: E0313 00:02:59.249210 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.249453 kubelet[2780]: W0313 00:02:59.249252 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.249453 kubelet[2780]: E0313 00:02:59.249265 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.249827 kubelet[2780]: E0313 00:02:59.249635 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.249827 kubelet[2780]: W0313 00:02:59.249649 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.249827 kubelet[2780]: E0313 00:02:59.249659 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.251396 kubelet[2780]: E0313 00:02:59.251170 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.251396 kubelet[2780]: W0313 00:02:59.251190 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.251396 kubelet[2780]: E0313 00:02:59.251206 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.251670 kubelet[2780]: E0313 00:02:59.251650 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.251670 kubelet[2780]: W0313 00:02:59.251666 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.251904 kubelet[2780]: E0313 00:02:59.251675 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.252284 kubelet[2780]: E0313 00:02:59.252263 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.252284 kubelet[2780]: W0313 00:02:59.252279 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.252491 kubelet[2780]: E0313 00:02:59.252290 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:02:59.258660 containerd[1547]: time="2026-03-13T00:02:59.258610782Z" level=info msg="connecting to shim 6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2" address="unix:///run/containerd/s/d09269e22b774544b93ef1ef6686ee98d52cffa2c91ad39baa01961601c386cb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:02:59.262727 kubelet[2780]: E0313 00:02:59.262234 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:02:59.262727 kubelet[2780]: W0313 00:02:59.262260 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:02:59.262727 kubelet[2780]: E0313 00:02:59.262281 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:02:59.286446 systemd[1]: Started cri-containerd-6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2.scope - libcontainer container 6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2. 
Mar 13 00:02:59.318461 containerd[1547]: time="2026-03-13T00:02:59.318206302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xj95r,Uid:86216013-c732-43e3-8cff-b66a7b79c742,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\"" Mar 13 00:03:00.489704 kubelet[2780]: E0313 00:03:00.489602 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:00.762546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270302452.mount: Deactivated successfully. Mar 13 00:03:01.550284 containerd[1547]: time="2026-03-13T00:03:01.550199670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:01.551453 containerd[1547]: time="2026-03-13T00:03:01.551288389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 13 00:03:01.552735 containerd[1547]: time="2026-03-13T00:03:01.552682588Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:01.556463 containerd[1547]: time="2026-03-13T00:03:01.556136466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:01.557982 containerd[1547]: time="2026-03-13T00:03:01.557931505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.346917612s" Mar 13 00:03:01.558362 containerd[1547]: time="2026-03-13T00:03:01.558186305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 13 00:03:01.559655 containerd[1547]: time="2026-03-13T00:03:01.559617784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 13 00:03:01.581809 containerd[1547]: time="2026-03-13T00:03:01.581758891Z" level=info msg="CreateContainer within sandbox \"dd0db016f5c398e57de2e975af3defea8c5a73c1a915ad937653823bd97f3b06\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 13 00:03:01.594120 containerd[1547]: time="2026-03-13T00:03:01.592472845Z" level=info msg="Container c4a10660af11addf649440b2bad310c8a434f4451e373fd4c6cea1b6f8ad5c56: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:01.606086 containerd[1547]: time="2026-03-13T00:03:01.605576238Z" level=info msg="CreateContainer within sandbox \"dd0db016f5c398e57de2e975af3defea8c5a73c1a915ad937653823bd97f3b06\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c4a10660af11addf649440b2bad310c8a434f4451e373fd4c6cea1b6f8ad5c56\"" Mar 13 00:03:01.608092 containerd[1547]: time="2026-03-13T00:03:01.607582876Z" level=info msg="StartContainer for \"c4a10660af11addf649440b2bad310c8a434f4451e373fd4c6cea1b6f8ad5c56\"" Mar 13 00:03:01.611290 containerd[1547]: time="2026-03-13T00:03:01.611236754Z" level=info msg="connecting to shim c4a10660af11addf649440b2bad310c8a434f4451e373fd4c6cea1b6f8ad5c56" address="unix:///run/containerd/s/2b5902574b2e539d8f468b7fff00480793761bf1db366276b4b47ca1824e3618" protocol=ttrpc version=3 Mar 13 
00:03:01.634316 systemd[1]: Started cri-containerd-c4a10660af11addf649440b2bad310c8a434f4451e373fd4c6cea1b6f8ad5c56.scope - libcontainer container c4a10660af11addf649440b2bad310c8a434f4451e373fd4c6cea1b6f8ad5c56. Mar 13 00:03:01.688327 containerd[1547]: time="2026-03-13T00:03:01.688185069Z" level=info msg="StartContainer for \"c4a10660af11addf649440b2bad310c8a434f4451e373fd4c6cea1b6f8ad5c56\" returns successfully" Mar 13 00:03:02.490144 kubelet[2780]: E0313 00:03:02.490054 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:02.622952 kubelet[2780]: I0313 00:03:02.622723 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55bf4bd796-6j5lv" podStartSLOduration=2.273992377 podStartE2EDuration="4.622700268s" podCreationTimestamp="2026-03-13 00:02:58 +0000 UTC" firstStartedPulling="2026-03-13 00:02:59.210437054 +0000 UTC m=+25.885995468" lastFinishedPulling="2026-03-13 00:03:01.559144945 +0000 UTC m=+28.234703359" observedRunningTime="2026-03-13 00:03:02.620952709 +0000 UTC m=+29.296511163" watchObservedRunningTime="2026-03-13 00:03:02.622700268 +0000 UTC m=+29.298258682" Mar 13 00:03:02.640879 kubelet[2780]: E0313 00:03:02.640800 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.640879 kubelet[2780]: W0313 00:03:02.640857 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.640879 kubelet[2780]: E0313 00:03:02.640888 2780 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.641266 kubelet[2780]: E0313 00:03:02.641231 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.641335 kubelet[2780]: W0313 00:03:02.641273 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.641335 kubelet[2780]: E0313 00:03:02.641293 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.641683 kubelet[2780]: E0313 00:03:02.641543 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.641683 kubelet[2780]: W0313 00:03:02.641565 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.641683 kubelet[2780]: E0313 00:03:02.641591 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.641923 kubelet[2780]: E0313 00:03:02.641799 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.641923 kubelet[2780]: W0313 00:03:02.641810 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.641923 kubelet[2780]: E0313 00:03:02.641843 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.642151 kubelet[2780]: E0313 00:03:02.642125 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.642251 kubelet[2780]: W0313 00:03:02.642145 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.642251 kubelet[2780]: E0313 00:03:02.642180 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.642528 kubelet[2780]: E0313 00:03:02.642468 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.642528 kubelet[2780]: W0313 00:03:02.642489 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.642528 kubelet[2780]: E0313 00:03:02.642520 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.642813 kubelet[2780]: E0313 00:03:02.642762 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.642813 kubelet[2780]: W0313 00:03:02.642782 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.642813 kubelet[2780]: E0313 00:03:02.642799 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.643163 kubelet[2780]: E0313 00:03:02.643034 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.643163 kubelet[2780]: W0313 00:03:02.643053 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.643163 kubelet[2780]: E0313 00:03:02.643095 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.643673 kubelet[2780]: E0313 00:03:02.643386 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.643673 kubelet[2780]: W0313 00:03:02.643409 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.643673 kubelet[2780]: E0313 00:03:02.643448 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.643814 kubelet[2780]: E0313 00:03:02.643706 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.643814 kubelet[2780]: W0313 00:03:02.643718 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.643814 kubelet[2780]: E0313 00:03:02.643734 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.644070 kubelet[2780]: E0313 00:03:02.643925 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.644070 kubelet[2780]: W0313 00:03:02.643948 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.644070 kubelet[2780]: E0313 00:03:02.643962 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.644203 kubelet[2780]: E0313 00:03:02.644188 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.644260 kubelet[2780]: W0313 00:03:02.644203 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.644260 kubelet[2780]: E0313 00:03:02.644231 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.644520 kubelet[2780]: E0313 00:03:02.644501 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.644520 kubelet[2780]: W0313 00:03:02.644514 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.644605 kubelet[2780]: E0313 00:03:02.644523 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.644672 kubelet[2780]: E0313 00:03:02.644659 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.644672 kubelet[2780]: W0313 00:03:02.644669 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.644722 kubelet[2780]: E0313 00:03:02.644678 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.644808 kubelet[2780]: E0313 00:03:02.644797 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.644832 kubelet[2780]: W0313 00:03:02.644807 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.644832 kubelet[2780]: E0313 00:03:02.644815 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.673293 kubelet[2780]: E0313 00:03:02.673147 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.673293 kubelet[2780]: W0313 00:03:02.673188 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.673293 kubelet[2780]: E0313 00:03:02.673285 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.674117 kubelet[2780]: E0313 00:03:02.673687 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.674117 kubelet[2780]: W0313 00:03:02.673726 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.674117 kubelet[2780]: E0313 00:03:02.673748 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.675510 kubelet[2780]: E0313 00:03:02.674356 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.675510 kubelet[2780]: W0313 00:03:02.674376 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.675510 kubelet[2780]: E0313 00:03:02.674400 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.675510 kubelet[2780]: E0313 00:03:02.674759 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.675510 kubelet[2780]: W0313 00:03:02.674772 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.675510 kubelet[2780]: E0313 00:03:02.674789 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.675510 kubelet[2780]: E0313 00:03:02.675111 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.675510 kubelet[2780]: W0313 00:03:02.675124 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.675510 kubelet[2780]: E0313 00:03:02.675137 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.675510 kubelet[2780]: E0313 00:03:02.675438 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.676940 kubelet[2780]: W0313 00:03:02.675451 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.676940 kubelet[2780]: E0313 00:03:02.675465 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.676940 kubelet[2780]: E0313 00:03:02.675688 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.676940 kubelet[2780]: W0313 00:03:02.675715 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.676940 kubelet[2780]: E0313 00:03:02.675728 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.676940 kubelet[2780]: E0313 00:03:02.676041 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.676940 kubelet[2780]: W0313 00:03:02.676054 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.676940 kubelet[2780]: E0313 00:03:02.676090 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.676940 kubelet[2780]: E0313 00:03:02.676548 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.676940 kubelet[2780]: W0313 00:03:02.676579 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.678002 kubelet[2780]: E0313 00:03:02.676594 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.678002 kubelet[2780]: E0313 00:03:02.677018 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.678002 kubelet[2780]: W0313 00:03:02.677050 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.678002 kubelet[2780]: E0313 00:03:02.677092 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.678002 kubelet[2780]: E0313 00:03:02.677454 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.678002 kubelet[2780]: W0313 00:03:02.677467 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.678002 kubelet[2780]: E0313 00:03:02.677479 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.678002 kubelet[2780]: E0313 00:03:02.678002 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.678002 kubelet[2780]: W0313 00:03:02.678016 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.678303 kubelet[2780]: E0313 00:03:02.678030 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.678678 kubelet[2780]: E0313 00:03:02.678653 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.678678 kubelet[2780]: W0313 00:03:02.678670 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.678678 kubelet[2780]: E0313 00:03:02.678681 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.678902 kubelet[2780]: E0313 00:03:02.678888 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.678902 kubelet[2780]: W0313 00:03:02.678901 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.678972 kubelet[2780]: E0313 00:03:02.678910 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.679076 kubelet[2780]: E0313 00:03:02.679053 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.679112 kubelet[2780]: W0313 00:03:02.679096 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.679112 kubelet[2780]: E0313 00:03:02.679105 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.679269 kubelet[2780]: E0313 00:03:02.679256 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.679269 kubelet[2780]: W0313 00:03:02.679269 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.679401 kubelet[2780]: E0313 00:03:02.679278 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:02.679444 kubelet[2780]: E0313 00:03:02.679426 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.679444 kubelet[2780]: W0313 00:03:02.679439 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.679495 kubelet[2780]: E0313 00:03:02.679448 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:03:02.679921 kubelet[2780]: E0313 00:03:02.679904 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:03:02.679921 kubelet[2780]: W0313 00:03:02.679920 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:03:02.679989 kubelet[2780]: E0313 00:03:02.679930 2780 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:03:03.164807 containerd[1547]: time="2026-03-13T00:03:03.164091178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:03.165945 containerd[1547]: time="2026-03-13T00:03:03.165913297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 13 00:03:03.167428 containerd[1547]: time="2026-03-13T00:03:03.167397497Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:03.170156 containerd[1547]: time="2026-03-13T00:03:03.170105615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:03.170854 containerd[1547]: time="2026-03-13T00:03:03.170814935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.611161071s" Mar 13 00:03:03.170854 containerd[1547]: time="2026-03-13T00:03:03.170853295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 13 00:03:03.179400 containerd[1547]: time="2026-03-13T00:03:03.178702691Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:03:03.187940 containerd[1547]: time="2026-03-13T00:03:03.186546367Z" level=info msg="Container 40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:03.199754 containerd[1547]: time="2026-03-13T00:03:03.199694680Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66\"" Mar 13 00:03:03.200450 containerd[1547]: time="2026-03-13T00:03:03.200403800Z" level=info msg="StartContainer for \"40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66\"" Mar 13 00:03:03.202505 containerd[1547]: time="2026-03-13T00:03:03.202474839Z" level=info msg="connecting to shim 40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66" address="unix:///run/containerd/s/d09269e22b774544b93ef1ef6686ee98d52cffa2c91ad39baa01961601c386cb" protocol=ttrpc version=3 Mar 13 00:03:03.230303 systemd[1]: Started cri-containerd-40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66.scope - libcontainer container 40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66. Mar 13 00:03:03.307491 containerd[1547]: time="2026-03-13T00:03:03.307397985Z" level=info msg="StartContainer for \"40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66\" returns successfully" Mar 13 00:03:03.322253 systemd[1]: cri-containerd-40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66.scope: Deactivated successfully. 
Mar 13 00:03:03.328614 containerd[1547]: time="2026-03-13T00:03:03.328539334Z" level=info msg="received container exit event container_id:\"40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66\" id:\"40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66\" pid:3454 exited_at:{seconds:1773360183 nanos:327435615}" Mar 13 00:03:03.355348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40a264745a3453462eb5b6c8ee1701ee2591df580e2825a70bc00b1ff7086d66-rootfs.mount: Deactivated successfully. Mar 13 00:03:03.611143 kubelet[2780]: I0313 00:03:03.609891 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:03:03.613395 containerd[1547]: time="2026-03-13T00:03:03.613112589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:03:04.490976 kubelet[2780]: E0313 00:03:04.490547 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:06.490091 kubelet[2780]: E0313 00:03:06.490001 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:08.489495 kubelet[2780]: E0313 00:03:08.489440 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:09.984350 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount868860792.mount: Deactivated successfully. Mar 13 00:03:10.013088 containerd[1547]: time="2026-03-13T00:03:10.012582607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:10.014806 containerd[1547]: time="2026-03-13T00:03:10.014775507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 13 00:03:10.015864 containerd[1547]: time="2026-03-13T00:03:10.015808474Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:10.019681 containerd[1547]: time="2026-03-13T00:03:10.019273792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:10.019681 containerd[1547]: time="2026-03-13T00:03:10.019559204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.406411655s" Mar 13 00:03:10.019681 containerd[1547]: time="2026-03-13T00:03:10.019589126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 13 00:03:10.023824 containerd[1547]: time="2026-03-13T00:03:10.023739555Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 
00:03:10.036428 containerd[1547]: time="2026-03-13T00:03:10.035560092Z" level=info msg="Container 49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:10.051105 containerd[1547]: time="2026-03-13T00:03:10.050543534Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8\"" Mar 13 00:03:10.056680 containerd[1547]: time="2026-03-13T00:03:10.056638691Z" level=info msg="StartContainer for \"49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8\"" Mar 13 00:03:10.059786 containerd[1547]: time="2026-03-13T00:03:10.059742112Z" level=info msg="connecting to shim 49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8" address="unix:///run/containerd/s/d09269e22b774544b93ef1ef6686ee98d52cffa2c91ad39baa01961601c386cb" protocol=ttrpc version=3 Mar 13 00:03:10.085362 systemd[1]: Started cri-containerd-49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8.scope - libcontainer container 49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8. Mar 13 00:03:10.175108 containerd[1547]: time="2026-03-13T00:03:10.175037955Z" level=info msg="StartContainer for \"49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8\" returns successfully" Mar 13 00:03:10.319902 systemd[1]: cri-containerd-49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8.scope: Deactivated successfully. 
Mar 13 00:03:10.322455 containerd[1547]: time="2026-03-13T00:03:10.321209842Z" level=info msg="received container exit event container_id:\"49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8\" id:\"49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8\" pid:3510 exited_at:{seconds:1773360190 nanos:320759902}" Mar 13 00:03:10.490567 kubelet[2780]: E0313 00:03:10.490375 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:10.638537 containerd[1547]: time="2026-03-13T00:03:10.638483590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:03:10.984583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49dc4ec0066472adce5a017a272e4797ee51ded27f076d76130f41f7fc0e48c8-rootfs.mount: Deactivated successfully. 
Mar 13 00:03:11.155517 kubelet[2780]: I0313 00:03:11.155437 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:03:12.490617 kubelet[2780]: E0313 00:03:12.490515 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:13.994163 containerd[1547]: time="2026-03-13T00:03:13.994096299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:13.996099 containerd[1547]: time="2026-03-13T00:03:13.995890254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 13 00:03:13.997133 containerd[1547]: time="2026-03-13T00:03:13.997057983Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:14.000017 containerd[1547]: time="2026-03-13T00:03:13.999813178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:14.001018 containerd[1547]: time="2026-03-13T00:03:14.000981947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.362251186s" Mar 13 00:03:14.001396 containerd[1547]: time="2026-03-13T00:03:14.001086471Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 13 00:03:14.007618 containerd[1547]: time="2026-03-13T00:03:14.007571335Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:03:14.021814 containerd[1547]: time="2026-03-13T00:03:14.020878637Z" level=info msg="Container 56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:14.036274 containerd[1547]: time="2026-03-13T00:03:14.036139619Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1\"" Mar 13 00:03:14.037568 containerd[1547]: time="2026-03-13T00:03:14.037260424Z" level=info msg="StartContainer for \"56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1\"" Mar 13 00:03:14.039598 containerd[1547]: time="2026-03-13T00:03:14.039564358Z" level=info msg="connecting to shim 56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1" address="unix:///run/containerd/s/d09269e22b774544b93ef1ef6686ee98d52cffa2c91ad39baa01961601c386cb" protocol=ttrpc version=3 Mar 13 00:03:14.071308 systemd[1]: Started cri-containerd-56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1.scope - libcontainer container 56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1. 
Mar 13 00:03:14.158812 containerd[1547]: time="2026-03-13T00:03:14.158631247Z" level=info msg="StartContainer for \"56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1\" returns successfully" Mar 13 00:03:14.490883 kubelet[2780]: E0313 00:03:14.490819 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6ht6" podUID="00be1f59-88c4-4438-b27d-917295571b53" Mar 13 00:03:14.824195 systemd[1]: cri-containerd-56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1.scope: Deactivated successfully. Mar 13 00:03:14.824537 systemd[1]: cri-containerd-56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1.scope: Consumed 525ms CPU time, 192.3M memory peak, 171.3M written to disk. Mar 13 00:03:14.833107 containerd[1547]: time="2026-03-13T00:03:14.832945069Z" level=info msg="received container exit event container_id:\"56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1\" id:\"56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1\" pid:3571 exited_at:{seconds:1773360194 nanos:832751421}" Mar 13 00:03:14.865568 kubelet[2780]: I0313 00:03:14.864017 2780 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 13 00:03:14.927948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56a35dddcbb0f36a387031230adb1e439711cc90e9121fe78f81db9ec75804a1-rootfs.mount: Deactivated successfully. Mar 13 00:03:14.944024 systemd[1]: Created slice kubepods-burstable-pod2342ff31_0324_4c15_ba8c_63477bccb715.slice - libcontainer container kubepods-burstable-pod2342ff31_0324_4c15_ba8c_63477bccb715.slice. 
Mar 13 00:03:14.964791 systemd[1]: Created slice kubepods-besteffort-poda121ceba_000c_4da5_9bb5_f183423a4619.slice - libcontainer container kubepods-besteffort-poda121ceba_000c_4da5_9bb5_f183423a4619.slice. Mar 13 00:03:14.982084 kubelet[2780]: I0313 00:03:14.981924 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f-config-volume\") pod \"coredns-674b8bbfcf-qsps2\" (UID: \"f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f\") " pod="kube-system/coredns-674b8bbfcf-qsps2" Mar 13 00:03:14.982084 kubelet[2780]: I0313 00:03:14.981989 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5blp\" (UniqueName: \"kubernetes.io/projected/f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f-kube-api-access-n5blp\") pod \"coredns-674b8bbfcf-qsps2\" (UID: \"f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f\") " pod="kube-system/coredns-674b8bbfcf-qsps2" Mar 13 00:03:14.991272 systemd[1]: Created slice kubepods-burstable-podf4bb0fab_39ff_4cb6_8657_a4e8f09c7b1f.slice - libcontainer container kubepods-burstable-podf4bb0fab_39ff_4cb6_8657_a4e8f09c7b1f.slice. Mar 13 00:03:15.003490 systemd[1]: Created slice kubepods-besteffort-pod91d0abaf_4815_4aea_819a_473a1eb106b8.slice - libcontainer container kubepods-besteffort-pod91d0abaf_4815_4aea_819a_473a1eb106b8.slice. Mar 13 00:03:15.013592 systemd[1]: Created slice kubepods-besteffort-pod0d1cef36_5dde_4c02_a3f5_f4b49f0d9528.slice - libcontainer container kubepods-besteffort-pod0d1cef36_5dde_4c02_a3f5_f4b49f0d9528.slice. Mar 13 00:03:15.020869 systemd[1]: Created slice kubepods-besteffort-pod60e1da4c_b449_490a_8fd7_8c8e37b407af.slice - libcontainer container kubepods-besteffort-pod60e1da4c_b449_490a_8fd7_8c8e37b407af.slice. 
Mar 13 00:03:15.030265 systemd[1]: Created slice kubepods-besteffort-pod4c6f0879_0241_4332_9161_877f63c69a41.slice - libcontainer container kubepods-besteffort-pod4c6f0879_0241_4332_9161_877f63c69a41.slice. Mar 13 00:03:15.083021 kubelet[2780]: I0313 00:03:15.082888 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprds\" (UniqueName: \"kubernetes.io/projected/0d1cef36-5dde-4c02-a3f5-f4b49f0d9528-kube-api-access-dprds\") pod \"calico-apiserver-6f9d647bcf-kt45l\" (UID: \"0d1cef36-5dde-4c02-a3f5-f4b49f0d9528\") " pod="calico-system/calico-apiserver-6f9d647bcf-kt45l" Mar 13 00:03:15.083343 kubelet[2780]: I0313 00:03:15.083213 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2342ff31-0324-4c15-ba8c-63477bccb715-config-volume\") pod \"coredns-674b8bbfcf-ps5pt\" (UID: \"2342ff31-0324-4c15-ba8c-63477bccb715\") " pod="kube-system/coredns-674b8bbfcf-ps5pt" Mar 13 00:03:15.083343 kubelet[2780]: I0313 00:03:15.083283 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wc8\" (UniqueName: \"kubernetes.io/projected/2342ff31-0324-4c15-ba8c-63477bccb715-kube-api-access-v6wc8\") pod \"coredns-674b8bbfcf-ps5pt\" (UID: \"2342ff31-0324-4c15-ba8c-63477bccb715\") " pod="kube-system/coredns-674b8bbfcf-ps5pt" Mar 13 00:03:15.083343 kubelet[2780]: I0313 00:03:15.083306 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs76h\" (UniqueName: \"kubernetes.io/projected/60e1da4c-b449-490a-8fd7-8c8e37b407af-kube-api-access-cs76h\") pod \"goldmane-5b85766d88-psgzh\" (UID: \"60e1da4c-b449-490a-8fd7-8c8e37b407af\") " pod="calico-system/goldmane-5b85766d88-psgzh" Mar 13 00:03:15.083700 kubelet[2780]: I0313 00:03:15.083328 2780 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-backend-key-pair\") pod \"whisker-74bc4cbd96-b7qwl\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " pod="calico-system/whisker-74bc4cbd96-b7qwl" Mar 13 00:03:15.083700 kubelet[2780]: I0313 00:03:15.083598 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-ca-bundle\") pod \"whisker-74bc4cbd96-b7qwl\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " pod="calico-system/whisker-74bc4cbd96-b7qwl" Mar 13 00:03:15.083700 kubelet[2780]: I0313 00:03:15.083651 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c556g\" (UniqueName: \"kubernetes.io/projected/91d0abaf-4815-4aea-819a-473a1eb106b8-kube-api-access-c556g\") pod \"whisker-74bc4cbd96-b7qwl\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " pod="calico-system/whisker-74bc4cbd96-b7qwl" Mar 13 00:03:15.084100 kubelet[2780]: I0313 00:03:15.083951 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e1da4c-b449-490a-8fd7-8c8e37b407af-config\") pod \"goldmane-5b85766d88-psgzh\" (UID: \"60e1da4c-b449-490a-8fd7-8c8e37b407af\") " pod="calico-system/goldmane-5b85766d88-psgzh" Mar 13 00:03:15.084321 kubelet[2780]: I0313 00:03:15.084268 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/60e1da4c-b449-490a-8fd7-8c8e37b407af-goldmane-key-pair\") pod \"goldmane-5b85766d88-psgzh\" (UID: \"60e1da4c-b449-490a-8fd7-8c8e37b407af\") " pod="calico-system/goldmane-5b85766d88-psgzh" Mar 13 00:03:15.084575 kubelet[2780]: I0313 
00:03:15.084525 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60e1da4c-b449-490a-8fd7-8c8e37b407af-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-psgzh\" (UID: \"60e1da4c-b449-490a-8fd7-8c8e37b407af\") " pod="calico-system/goldmane-5b85766d88-psgzh" Mar 13 00:03:15.084688 kubelet[2780]: I0313 00:03:15.084669 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a121ceba-000c-4da5-9bb5-f183423a4619-tigera-ca-bundle\") pod \"calico-kube-controllers-7995888b6d-w882j\" (UID: \"a121ceba-000c-4da5-9bb5-f183423a4619\") " pod="calico-system/calico-kube-controllers-7995888b6d-w882j" Mar 13 00:03:15.084874 kubelet[2780]: I0313 00:03:15.084832 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-nginx-config\") pod \"whisker-74bc4cbd96-b7qwl\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " pod="calico-system/whisker-74bc4cbd96-b7qwl" Mar 13 00:03:15.086137 kubelet[2780]: I0313 00:03:15.085442 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c6f0879-0241-4332-9161-877f63c69a41-calico-apiserver-certs\") pod \"calico-apiserver-6f9d647bcf-qvp9s\" (UID: \"4c6f0879-0241-4332-9161-877f63c69a41\") " pod="calico-system/calico-apiserver-6f9d647bcf-qvp9s" Mar 13 00:03:15.086137 kubelet[2780]: I0313 00:03:15.085492 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxstb\" (UniqueName: \"kubernetes.io/projected/4c6f0879-0241-4332-9161-877f63c69a41-kube-api-access-fxstb\") pod \"calico-apiserver-6f9d647bcf-qvp9s\" (UID: 
\"4c6f0879-0241-4332-9161-877f63c69a41\") " pod="calico-system/calico-apiserver-6f9d647bcf-qvp9s" Mar 13 00:03:15.086137 kubelet[2780]: I0313 00:03:15.085518 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxkkf\" (UniqueName: \"kubernetes.io/projected/a121ceba-000c-4da5-9bb5-f183423a4619-kube-api-access-vxkkf\") pod \"calico-kube-controllers-7995888b6d-w882j\" (UID: \"a121ceba-000c-4da5-9bb5-f183423a4619\") " pod="calico-system/calico-kube-controllers-7995888b6d-w882j" Mar 13 00:03:15.086137 kubelet[2780]: I0313 00:03:15.085585 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d1cef36-5dde-4c02-a3f5-f4b49f0d9528-calico-apiserver-certs\") pod \"calico-apiserver-6f9d647bcf-kt45l\" (UID: \"0d1cef36-5dde-4c02-a3f5-f4b49f0d9528\") " pod="calico-system/calico-apiserver-6f9d647bcf-kt45l" Mar 13 00:03:15.283299 containerd[1547]: time="2026-03-13T00:03:15.283080772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ps5pt,Uid:2342ff31-0324-4c15-ba8c-63477bccb715,Namespace:kube-system,Attempt:0,}" Mar 13 00:03:15.287253 containerd[1547]: time="2026-03-13T00:03:15.286738717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995888b6d-w882j,Uid:a121ceba-000c-4da5-9bb5-f183423a4619,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:15.300047 containerd[1547]: time="2026-03-13T00:03:15.299998403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qsps2,Uid:f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f,Namespace:kube-system,Attempt:0,}" Mar 13 00:03:15.314126 containerd[1547]: time="2026-03-13T00:03:15.313035159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74bc4cbd96-b7qwl,Uid:91d0abaf-4815-4aea-819a-473a1eb106b8,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:15.324102 containerd[1547]: 
time="2026-03-13T00:03:15.322764545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-kt45l,Uid:0d1cef36-5dde-4c02-a3f5-f4b49f0d9528,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:15.328637 containerd[1547]: time="2026-03-13T00:03:15.328584416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-psgzh,Uid:60e1da4c-b449-490a-8fd7-8c8e37b407af,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:15.335458 containerd[1547]: time="2026-03-13T00:03:15.335259680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-qvp9s,Uid:4c6f0879-0241-4332-9161-877f63c69a41,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:15.553353 containerd[1547]: time="2026-03-13T00:03:15.553284280Z" level=error msg="Failed to destroy network for sandbox \"6f4e102a8f7d2fb7762f38846514a57ec938627b3a458de2ba22451e4b645bb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.556149 containerd[1547]: time="2026-03-13T00:03:15.556053630Z" level=error msg="Failed to destroy network for sandbox \"dba295fc652761e6b4bf65b0c3756a310de98fd5dce46b9f9aa548a0cc4a3b23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.558670 containerd[1547]: time="2026-03-13T00:03:15.558608531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ps5pt,Uid:2342ff31-0324-4c15-ba8c-63477bccb715,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f4e102a8f7d2fb7762f38846514a57ec938627b3a458de2ba22451e4b645bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.558950 kubelet[2780]: E0313 00:03:15.558891 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f4e102a8f7d2fb7762f38846514a57ec938627b3a458de2ba22451e4b645bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.559301 kubelet[2780]: E0313 00:03:15.558984 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f4e102a8f7d2fb7762f38846514a57ec938627b3a458de2ba22451e4b645bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ps5pt" Mar 13 00:03:15.559301 kubelet[2780]: E0313 00:03:15.559005 2780 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f4e102a8f7d2fb7762f38846514a57ec938627b3a458de2ba22451e4b645bb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ps5pt" Mar 13 00:03:15.559301 kubelet[2780]: E0313 00:03:15.559096 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ps5pt_kube-system(2342ff31-0324-4c15-ba8c-63477bccb715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ps5pt_kube-system(2342ff31-0324-4c15-ba8c-63477bccb715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6f4e102a8f7d2fb7762f38846514a57ec938627b3a458de2ba22451e4b645bb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ps5pt" podUID="2342ff31-0324-4c15-ba8c-63477bccb715" Mar 13 00:03:15.562057 containerd[1547]: time="2026-03-13T00:03:15.561997466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qsps2,Uid:f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba295fc652761e6b4bf65b0c3756a310de98fd5dce46b9f9aa548a0cc4a3b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.562701 kubelet[2780]: E0313 00:03:15.562649 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba295fc652761e6b4bf65b0c3756a310de98fd5dce46b9f9aa548a0cc4a3b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.562813 kubelet[2780]: E0313 00:03:15.562716 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba295fc652761e6b4bf65b0c3756a310de98fd5dce46b9f9aa548a0cc4a3b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qsps2" Mar 13 00:03:15.562813 kubelet[2780]: E0313 00:03:15.562739 2780 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"dba295fc652761e6b4bf65b0c3756a310de98fd5dce46b9f9aa548a0cc4a3b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qsps2" Mar 13 00:03:15.562813 kubelet[2780]: E0313 00:03:15.562788 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qsps2_kube-system(f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qsps2_kube-system(f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dba295fc652761e6b4bf65b0c3756a310de98fd5dce46b9f9aa548a0cc4a3b23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qsps2" podUID="f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f" Mar 13 00:03:15.570624 containerd[1547]: time="2026-03-13T00:03:15.570561725Z" level=error msg="Failed to destroy network for sandbox \"f0fabf8fa0f77e950383220dac60975f4c4f262d9f7bc37571b6f5a4c09af7fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.574475 containerd[1547]: time="2026-03-13T00:03:15.574355515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-psgzh,Uid:60e1da4c-b449-490a-8fd7-8c8e37b407af,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0fabf8fa0f77e950383220dac60975f4c4f262d9f7bc37571b6f5a4c09af7fd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.574736 kubelet[2780]: E0313 00:03:15.574686 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0fabf8fa0f77e950383220dac60975f4c4f262d9f7bc37571b6f5a4c09af7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.574806 kubelet[2780]: E0313 00:03:15.574794 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0fabf8fa0f77e950383220dac60975f4c4f262d9f7bc37571b6f5a4c09af7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-psgzh" Mar 13 00:03:15.574833 kubelet[2780]: E0313 00:03:15.574815 2780 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0fabf8fa0f77e950383220dac60975f4c4f262d9f7bc37571b6f5a4c09af7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-psgzh" Mar 13 00:03:15.575094 kubelet[2780]: E0313 00:03:15.574860 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-psgzh_calico-system(60e1da4c-b449-490a-8fd7-8c8e37b407af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-psgzh_calico-system(60e1da4c-b449-490a-8fd7-8c8e37b407af)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"f0fabf8fa0f77e950383220dac60975f4c4f262d9f7bc37571b6f5a4c09af7fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-psgzh" podUID="60e1da4c-b449-490a-8fd7-8c8e37b407af" Mar 13 00:03:15.576571 containerd[1547]: time="2026-03-13T00:03:15.576518281Z" level=error msg="Failed to destroy network for sandbox \"2611543af239b08a998acb4b49e048ae2a4fe82399112c2caba8478090544499\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.579480 containerd[1547]: time="2026-03-13T00:03:15.579419956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995888b6d-w882j,Uid:a121ceba-000c-4da5-9bb5-f183423a4619,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2611543af239b08a998acb4b49e048ae2a4fe82399112c2caba8478090544499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.579889 kubelet[2780]: E0313 00:03:15.579841 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2611543af239b08a998acb4b49e048ae2a4fe82399112c2caba8478090544499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.580203 containerd[1547]: time="2026-03-13T00:03:15.580171546Z" level=error msg="Failed to destroy network for sandbox 
\"4d61dca7ffa02ce58e07e550cdf76cfb5b55eb0183a927f24ba4240436a6c705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.580295 kubelet[2780]: E0313 00:03:15.580058 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2611543af239b08a998acb4b49e048ae2a4fe82399112c2caba8478090544499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995888b6d-w882j" Mar 13 00:03:15.580295 kubelet[2780]: E0313 00:03:15.580243 2780 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2611543af239b08a998acb4b49e048ae2a4fe82399112c2caba8478090544499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995888b6d-w882j" Mar 13 00:03:15.580514 kubelet[2780]: E0313 00:03:15.580321 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995888b6d-w882j_calico-system(a121ceba-000c-4da5-9bb5-f183423a4619)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995888b6d-w882j_calico-system(a121ceba-000c-4da5-9bb5-f183423a4619)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2611543af239b08a998acb4b49e048ae2a4fe82399112c2caba8478090544499\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995888b6d-w882j" podUID="a121ceba-000c-4da5-9bb5-f183423a4619" Mar 13 00:03:15.583095 containerd[1547]: time="2026-03-13T00:03:15.582995578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-kt45l,Uid:0d1cef36-5dde-4c02-a3f5-f4b49f0d9528,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d61dca7ffa02ce58e07e550cdf76cfb5b55eb0183a927f24ba4240436a6c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.585021 kubelet[2780]: E0313 00:03:15.584632 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d61dca7ffa02ce58e07e550cdf76cfb5b55eb0183a927f24ba4240436a6c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.585021 kubelet[2780]: E0313 00:03:15.584701 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d61dca7ffa02ce58e07e550cdf76cfb5b55eb0183a927f24ba4240436a6c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f9d647bcf-kt45l" Mar 13 00:03:15.585021 kubelet[2780]: E0313 00:03:15.584749 2780 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d61dca7ffa02ce58e07e550cdf76cfb5b55eb0183a927f24ba4240436a6c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f9d647bcf-kt45l" Mar 13 00:03:15.585295 kubelet[2780]: E0313 00:03:15.584800 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f9d647bcf-kt45l_calico-system(0d1cef36-5dde-4c02-a3f5-f4b49f0d9528)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f9d647bcf-kt45l_calico-system(0d1cef36-5dde-4c02-a3f5-f4b49f0d9528)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d61dca7ffa02ce58e07e550cdf76cfb5b55eb0183a927f24ba4240436a6c705\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f9d647bcf-kt45l" podUID="0d1cef36-5dde-4c02-a3f5-f4b49f0d9528" Mar 13 00:03:15.587559 containerd[1547]: time="2026-03-13T00:03:15.586991136Z" level=error msg="Failed to destroy network for sandbox \"4b8c99ac28c42970d338e66c6a5edd06a716f0c769e5c70e08be98216f4a96bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.590502 containerd[1547]: time="2026-03-13T00:03:15.590448193Z" level=error msg="Failed to destroy network for sandbox \"5a16d7b88e26847734df2e7b82f8c7a17b30384f126d5b457a40ffd60a9500fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.591307 containerd[1547]: time="2026-03-13T00:03:15.591247905Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-qvp9s,Uid:4c6f0879-0241-4332-9161-877f63c69a41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8c99ac28c42970d338e66c6a5edd06a716f0c769e5c70e08be98216f4a96bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.592247 kubelet[2780]: E0313 00:03:15.592194 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8c99ac28c42970d338e66c6a5edd06a716f0c769e5c70e08be98216f4a96bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.592433 kubelet[2780]: E0313 00:03:15.592360 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8c99ac28c42970d338e66c6a5edd06a716f0c769e5c70e08be98216f4a96bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f9d647bcf-qvp9s" Mar 13 00:03:15.592559 kubelet[2780]: E0313 00:03:15.592499 2780 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8c99ac28c42970d338e66c6a5edd06a716f0c769e5c70e08be98216f4a96bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f9d647bcf-qvp9s" Mar 13 00:03:15.592696 kubelet[2780]: E0313 00:03:15.592650 2780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f9d647bcf-qvp9s_calico-system(4c6f0879-0241-4332-9161-877f63c69a41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f9d647bcf-qvp9s_calico-system(4c6f0879-0241-4332-9161-877f63c69a41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b8c99ac28c42970d338e66c6a5edd06a716f0c769e5c70e08be98216f4a96bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f9d647bcf-qvp9s" podUID="4c6f0879-0241-4332-9161-877f63c69a41" Mar 13 00:03:15.593710 containerd[1547]: time="2026-03-13T00:03:15.593661720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74bc4cbd96-b7qwl,Uid:91d0abaf-4815-4aea-819a-473a1eb106b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a16d7b88e26847734df2e7b82f8c7a17b30384f126d5b457a40ffd60a9500fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.594102 kubelet[2780]: E0313 00:03:15.593975 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a16d7b88e26847734df2e7b82f8c7a17b30384f126d5b457a40ffd60a9500fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:03:15.594269 kubelet[2780]: E0313 00:03:15.594238 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5a16d7b88e26847734df2e7b82f8c7a17b30384f126d5b457a40ffd60a9500fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74bc4cbd96-b7qwl" Mar 13 00:03:15.594348 kubelet[2780]: E0313 00:03:15.594331 2780 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a16d7b88e26847734df2e7b82f8c7a17b30384f126d5b457a40ffd60a9500fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74bc4cbd96-b7qwl" Mar 13 00:03:15.594583 kubelet[2780]: E0313 00:03:15.594490 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74bc4cbd96-b7qwl_calico-system(91d0abaf-4815-4aea-819a-473a1eb106b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74bc4cbd96-b7qwl_calico-system(91d0abaf-4815-4aea-819a-473a1eb106b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a16d7b88e26847734df2e7b82f8c7a17b30384f126d5b457a40ffd60a9500fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74bc4cbd96-b7qwl" podUID="91d0abaf-4815-4aea-819a-473a1eb106b8" Mar 13 00:03:15.688169 containerd[1547]: time="2026-03-13T00:03:15.688012100Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:03:15.697266 containerd[1547]: time="2026-03-13T00:03:15.696755126Z" level=info msg="Container 
ec4e9d4b49c51483f9512de7b276d895da8bbd4496fe4b4af2335f27de75d03c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:15.730676 containerd[1547]: time="2026-03-13T00:03:15.730516664Z" level=info msg="CreateContainer within sandbox \"6e0731c113eb0e69c14567506c31d5a2d625d5b0b29828901adbca96079b96f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec4e9d4b49c51483f9512de7b276d895da8bbd4496fe4b4af2335f27de75d03c\"" Mar 13 00:03:15.731979 containerd[1547]: time="2026-03-13T00:03:15.731705791Z" level=info msg="StartContainer for \"ec4e9d4b49c51483f9512de7b276d895da8bbd4496fe4b4af2335f27de75d03c\"" Mar 13 00:03:15.734986 containerd[1547]: time="2026-03-13T00:03:15.734917838Z" level=info msg="connecting to shim ec4e9d4b49c51483f9512de7b276d895da8bbd4496fe4b4af2335f27de75d03c" address="unix:///run/containerd/s/d09269e22b774544b93ef1ef6686ee98d52cffa2c91ad39baa01961601c386cb" protocol=ttrpc version=3 Mar 13 00:03:15.759311 systemd[1]: Started cri-containerd-ec4e9d4b49c51483f9512de7b276d895da8bbd4496fe4b4af2335f27de75d03c.scope - libcontainer container ec4e9d4b49c51483f9512de7b276d895da8bbd4496fe4b4af2335f27de75d03c. 
Mar 13 00:03:15.859220 containerd[1547]: time="2026-03-13T00:03:15.858359890Z" level=info msg="StartContainer for \"ec4e9d4b49c51483f9512de7b276d895da8bbd4496fe4b4af2335f27de75d03c\" returns successfully" Mar 13 00:03:16.093411 kubelet[2780]: I0313 00:03:16.093227 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-backend-key-pair\") pod \"91d0abaf-4815-4aea-819a-473a1eb106b8\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " Mar 13 00:03:16.093411 kubelet[2780]: I0313 00:03:16.093310 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-ca-bundle\") pod \"91d0abaf-4815-4aea-819a-473a1eb106b8\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " Mar 13 00:03:16.094844 kubelet[2780]: I0313 00:03:16.093636 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c556g\" (UniqueName: \"kubernetes.io/projected/91d0abaf-4815-4aea-819a-473a1eb106b8-kube-api-access-c556g\") pod \"91d0abaf-4815-4aea-819a-473a1eb106b8\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " Mar 13 00:03:16.094844 kubelet[2780]: I0313 00:03:16.093679 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-nginx-config\") pod \"91d0abaf-4815-4aea-819a-473a1eb106b8\" (UID: \"91d0abaf-4815-4aea-819a-473a1eb106b8\") " Mar 13 00:03:16.094844 kubelet[2780]: I0313 00:03:16.094109 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "91d0abaf-4815-4aea-819a-473a1eb106b8" (UID: "91d0abaf-4815-4aea-819a-473a1eb106b8"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:03:16.094844 kubelet[2780]: I0313 00:03:16.094465 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "91d0abaf-4815-4aea-819a-473a1eb106b8" (UID: "91d0abaf-4815-4aea-819a-473a1eb106b8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:03:16.099619 kubelet[2780]: I0313 00:03:16.099558 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "91d0abaf-4815-4aea-819a-473a1eb106b8" (UID: "91d0abaf-4815-4aea-819a-473a1eb106b8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:03:16.102126 kubelet[2780]: I0313 00:03:16.100227 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d0abaf-4815-4aea-819a-473a1eb106b8-kube-api-access-c556g" (OuterVolumeSpecName: "kube-api-access-c556g") pod "91d0abaf-4815-4aea-819a-473a1eb106b8" (UID: "91d0abaf-4815-4aea-819a-473a1eb106b8"). InnerVolumeSpecName "kube-api-access-c556g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:03:16.108418 systemd[1]: var-lib-kubelet-pods-91d0abaf\x2d4815\x2d4aea\x2d819a\x2d473a1eb106b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc556g.mount: Deactivated successfully. Mar 13 00:03:16.108526 systemd[1]: var-lib-kubelet-pods-91d0abaf\x2d4815\x2d4aea\x2d819a\x2d473a1eb106b8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 13 00:03:16.194775 kubelet[2780]: I0313 00:03:16.194102 2780 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c556g\" (UniqueName: \"kubernetes.io/projected/91d0abaf-4815-4aea-819a-473a1eb106b8-kube-api-access-c556g\") on node \"ci-4459-2-4-n-499db54055\" DevicePath \"\"" Mar 13 00:03:16.194775 kubelet[2780]: I0313 00:03:16.194162 2780 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-nginx-config\") on node \"ci-4459-2-4-n-499db54055\" DevicePath \"\"" Mar 13 00:03:16.194775 kubelet[2780]: I0313 00:03:16.194177 2780 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-499db54055\" DevicePath \"\"" Mar 13 00:03:16.194775 kubelet[2780]: I0313 00:03:16.194188 2780 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91d0abaf-4815-4aea-819a-473a1eb106b8-whisker-ca-bundle\") on node \"ci-4459-2-4-n-499db54055\" DevicePath \"\"" Mar 13 00:03:16.502302 systemd[1]: Created slice kubepods-besteffort-pod00be1f59_88c4_4438_b27d_917295571b53.slice - libcontainer container kubepods-besteffort-pod00be1f59_88c4_4438_b27d_917295571b53.slice. Mar 13 00:03:16.505110 containerd[1547]: time="2026-03-13T00:03:16.505052344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v6ht6,Uid:00be1f59-88c4-4438-b27d-917295571b53,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:16.699223 systemd[1]: Removed slice kubepods-besteffort-pod91d0abaf_4815_4aea_819a_473a1eb106b8.slice - libcontainer container kubepods-besteffort-pod91d0abaf_4815_4aea_819a_473a1eb106b8.slice. 
Mar 13 00:03:16.716121 systemd-networkd[1419]: cali63e231e3845: Link UP Mar 13 00:03:16.717355 systemd-networkd[1419]: cali63e231e3845: Gained carrier Mar 13 00:03:16.731199 kubelet[2780]: I0313 00:03:16.729848 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xj95r" podStartSLOduration=4.047241074 podStartE2EDuration="18.729833373s" podCreationTimestamp="2026-03-13 00:02:58 +0000 UTC" firstStartedPulling="2026-03-13 00:02:59.319704581 +0000 UTC m=+25.995262995" lastFinishedPulling="2026-03-13 00:03:14.00229688 +0000 UTC m=+40.677855294" observedRunningTime="2026-03-13 00:03:16.729615245 +0000 UTC m=+43.405173659" watchObservedRunningTime="2026-03-13 00:03:16.729833373 +0000 UTC m=+43.405391787" Mar 13 00:03:16.744816 containerd[1547]: 2026-03-13 00:03:16.538 [ERROR][3852] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:03:16.744816 containerd[1547]: 2026-03-13 00:03:16.567 [INFO][3852] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0 csi-node-driver- calico-system 00be1f59-88c4-4438-b27d-917295571b53 743 0 2026-03-13 00:02:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 csi-node-driver-v6ht6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali63e231e3845 [] [] }} ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" 
WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-" Mar 13 00:03:16.744816 containerd[1547]: 2026-03-13 00:03:16.567 [INFO][3852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" Mar 13 00:03:16.744816 containerd[1547]: 2026-03-13 00:03:16.615 [INFO][3863] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" HandleID="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Workload="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.633 [INFO][3863] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" HandleID="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Workload="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-499db54055", "pod":"csi-node-driver-v6ht6", "timestamp":"2026-03-13 00:03:16.615500243 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010e2c0)} Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.633 [INFO][3863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.633 [INFO][3863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.633 [INFO][3863] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.638 [INFO][3863] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.645 [INFO][3863] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.651 [INFO][3863] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.657 [INFO][3863] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745046 containerd[1547]: 2026-03-13 00:03:16.666 [INFO][3863] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745257 containerd[1547]: 2026-03-13 00:03:16.666 [INFO][3863] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745257 containerd[1547]: 2026-03-13 00:03:16.668 [INFO][3863] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883 Mar 13 00:03:16.745257 containerd[1547]: 2026-03-13 00:03:16.675 [INFO][3863] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745257 containerd[1547]: 2026-03-13 00:03:16.689 [INFO][3863] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.193/26] block=192.168.58.192/26 handle="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745257 containerd[1547]: 2026-03-13 00:03:16.689 [INFO][3863] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.193/26] handle="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:16.745257 containerd[1547]: 2026-03-13 00:03:16.689 [INFO][3863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:03:16.745257 containerd[1547]: 2026-03-13 00:03:16.689 [INFO][3863] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.193/26] IPv6=[] ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" HandleID="k8s-pod-network.67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Workload="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" Mar 13 00:03:16.745379 containerd[1547]: 2026-03-13 00:03:16.697 [INFO][3852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00be1f59-88c4-4438-b27d-917295571b53", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"csi-node-driver-v6ht6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63e231e3845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:16.745459 containerd[1547]: 2026-03-13 00:03:16.700 [INFO][3852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.193/32] ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" Mar 13 00:03:16.745459 containerd[1547]: 2026-03-13 00:03:16.700 [INFO][3852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63e231e3845 ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" Mar 13 00:03:16.745459 containerd[1547]: 2026-03-13 00:03:16.717 [INFO][3852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" Mar 13 00:03:16.745520 
containerd[1547]: 2026-03-13 00:03:16.717 [INFO][3852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00be1f59-88c4-4438-b27d-917295571b53", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883", Pod:"csi-node-driver-v6ht6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63e231e3845", MAC:"72:98:3f:2c:42:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:16.745574 containerd[1547]: 
2026-03-13 00:03:16.737 [INFO][3852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" Namespace="calico-system" Pod="csi-node-driver-v6ht6" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-csi--node--driver--v6ht6-eth0" Mar 13 00:03:16.831820 containerd[1547]: time="2026-03-13T00:03:16.831256485Z" level=info msg="connecting to shim 67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883" address="unix:///run/containerd/s/e39266c2291bd39fbae5e1c68adb581a5737504a00727276e5bafa24becdcc0c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:16.876331 systemd[1]: Started cri-containerd-67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883.scope - libcontainer container 67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883. Mar 13 00:03:16.878617 systemd[1]: Created slice kubepods-besteffort-podf3c0492d_0274_4e58_bd5a_70c71b18d76b.slice - libcontainer container kubepods-besteffort-podf3c0492d_0274_4e58_bd5a_70c71b18d76b.slice. 
Mar 13 00:03:16.915884 containerd[1547]: time="2026-03-13T00:03:16.915806866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v6ht6,Uid:00be1f59-88c4-4438-b27d-917295571b53,Namespace:calico-system,Attempt:0,} returns sandbox id \"67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883\"" Mar 13 00:03:16.918934 containerd[1547]: time="2026-03-13T00:03:16.918902025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:03:17.002324 kubelet[2780]: I0313 00:03:17.001940 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f3c0492d-0274-4e58-bd5a-70c71b18d76b-whisker-backend-key-pair\") pod \"whisker-98749464f-vc9cd\" (UID: \"f3c0492d-0274-4e58-bd5a-70c71b18d76b\") " pod="calico-system/whisker-98749464f-vc9cd" Mar 13 00:03:17.002324 kubelet[2780]: I0313 00:03:17.002130 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x77w\" (UniqueName: \"kubernetes.io/projected/f3c0492d-0274-4e58-bd5a-70c71b18d76b-kube-api-access-6x77w\") pod \"whisker-98749464f-vc9cd\" (UID: \"f3c0492d-0274-4e58-bd5a-70c71b18d76b\") " pod="calico-system/whisker-98749464f-vc9cd" Mar 13 00:03:17.002324 kubelet[2780]: I0313 00:03:17.002224 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f3c0492d-0274-4e58-bd5a-70c71b18d76b-nginx-config\") pod \"whisker-98749464f-vc9cd\" (UID: \"f3c0492d-0274-4e58-bd5a-70c71b18d76b\") " pod="calico-system/whisker-98749464f-vc9cd" Mar 13 00:03:17.002324 kubelet[2780]: I0313 00:03:17.002322 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3c0492d-0274-4e58-bd5a-70c71b18d76b-whisker-ca-bundle\") pod 
\"whisker-98749464f-vc9cd\" (UID: \"f3c0492d-0274-4e58-bd5a-70c71b18d76b\") " pod="calico-system/whisker-98749464f-vc9cd" Mar 13 00:03:17.189681 containerd[1547]: time="2026-03-13T00:03:17.189506748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98749464f-vc9cd,Uid:f3c0492d-0274-4e58-bd5a-70c71b18d76b,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:17.347244 systemd-networkd[1419]: cali23dda28191e: Link UP Mar 13 00:03:17.349342 systemd-networkd[1419]: cali23dda28191e: Gained carrier Mar 13 00:03:17.369491 containerd[1547]: 2026-03-13 00:03:17.219 [ERROR][3927] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:03:17.369491 containerd[1547]: 2026-03-13 00:03:17.235 [INFO][3927] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0 whisker-98749464f- calico-system f3c0492d-0274-4e58-bd5a-70c71b18d76b 931 0 2026-03-13 00:03:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:98749464f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 whisker-98749464f-vc9cd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali23dda28191e [] [] }} ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Namespace="calico-system" Pod="whisker-98749464f-vc9cd" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-" Mar 13 00:03:17.369491 containerd[1547]: 2026-03-13 00:03:17.235 [INFO][3927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Namespace="calico-system" Pod="whisker-98749464f-vc9cd" 
WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" Mar 13 00:03:17.369491 containerd[1547]: 2026-03-13 00:03:17.265 [INFO][3937] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" HandleID="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Workload="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.280 [INFO][3937] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" HandleID="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Workload="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-499db54055", "pod":"whisker-98749464f-vc9cd", "timestamp":"2026-03-13 00:03:17.265833853 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030cf20)} Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.280 [INFO][3937] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.281 [INFO][3937] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.281 [INFO][3937] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.284 [INFO][3937] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.291 [INFO][3937] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.306 [INFO][3937] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.311 [INFO][3937] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369708 containerd[1547]: 2026-03-13 00:03:17.314 [INFO][3937] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369891 containerd[1547]: 2026-03-13 00:03:17.314 [INFO][3937] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369891 containerd[1547]: 2026-03-13 00:03:17.317 [INFO][3937] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0 Mar 13 00:03:17.369891 containerd[1547]: 2026-03-13 00:03:17.327 [INFO][3937] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369891 containerd[1547]: 2026-03-13 00:03:17.335 [INFO][3937] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.194/26] block=192.168.58.192/26 handle="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369891 containerd[1547]: 2026-03-13 00:03:17.336 [INFO][3937] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.194/26] handle="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:17.369891 containerd[1547]: 2026-03-13 00:03:17.336 [INFO][3937] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:03:17.369891 containerd[1547]: 2026-03-13 00:03:17.336 [INFO][3937] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.194/26] IPv6=[] ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" HandleID="k8s-pod-network.8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Workload="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" Mar 13 00:03:17.370017 containerd[1547]: 2026-03-13 00:03:17.340 [INFO][3927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Namespace="calico-system" Pod="whisker-98749464f-vc9cd" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0", GenerateName:"whisker-98749464f-", Namespace:"calico-system", SelfLink:"", UID:"f3c0492d-0274-4e58-bd5a-70c71b18d76b", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"98749464f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"whisker-98749464f-vc9cd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali23dda28191e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:17.370017 containerd[1547]: 2026-03-13 00:03:17.341 [INFO][3927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.194/32] ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Namespace="calico-system" Pod="whisker-98749464f-vc9cd" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" Mar 13 00:03:17.370102 containerd[1547]: 2026-03-13 00:03:17.341 [INFO][3927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23dda28191e ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Namespace="calico-system" Pod="whisker-98749464f-vc9cd" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" Mar 13 00:03:17.370102 containerd[1547]: 2026-03-13 00:03:17.349 [INFO][3927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Namespace="calico-system" Pod="whisker-98749464f-vc9cd" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" Mar 13 00:03:17.370144 containerd[1547]: 2026-03-13 00:03:17.350 [INFO][3927] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" Namespace="calico-system" Pod="whisker-98749464f-vc9cd" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0", GenerateName:"whisker-98749464f-", Namespace:"calico-system", SelfLink:"", UID:"f3c0492d-0274-4e58-bd5a-70c71b18d76b", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 3, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"98749464f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0", Pod:"whisker-98749464f-vc9cd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali23dda28191e", MAC:"26:83:be:62:ca:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:17.370191 containerd[1547]: 2026-03-13 00:03:17.364 [INFO][3927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" 
Namespace="calico-system" Pod="whisker-98749464f-vc9cd" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-whisker--98749464f--vc9cd-eth0" Mar 13 00:03:17.417869 containerd[1547]: time="2026-03-13T00:03:17.417806598Z" level=info msg="connecting to shim 8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0" address="unix:///run/containerd/s/702a2eb1d267c3e1b6f13d6225379d01e2a24e5b82cdde14269827ebd39d39e8" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:17.477300 systemd[1]: Started cri-containerd-8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0.scope - libcontainer container 8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0. Mar 13 00:03:17.495846 kubelet[2780]: I0313 00:03:17.495792 2780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d0abaf-4815-4aea-819a-473a1eb106b8" path="/var/lib/kubelet/pods/91d0abaf-4815-4aea-819a-473a1eb106b8/volumes" Mar 13 00:03:17.570801 containerd[1547]: time="2026-03-13T00:03:17.570741179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98749464f-vc9cd,Uid:f3c0492d-0274-4e58-bd5a-70c71b18d76b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0\"" Mar 13 00:03:17.696917 kubelet[2780]: I0313 00:03:17.696887 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:03:17.840295 systemd-networkd[1419]: cali63e231e3845: Gained IPv6LL Mar 13 00:03:18.247548 systemd-networkd[1419]: vxlan.calico: Link UP Mar 13 00:03:18.247559 systemd-networkd[1419]: vxlan.calico: Gained carrier Mar 13 00:03:18.608639 systemd-networkd[1419]: cali23dda28191e: Gained IPv6LL Mar 13 00:03:19.024989 containerd[1547]: time="2026-03-13T00:03:19.024667916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:19.026809 containerd[1547]: time="2026-03-13T00:03:19.026723869Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 13 00:03:19.027900 containerd[1547]: time="2026-03-13T00:03:19.027852509Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:19.031085 containerd[1547]: time="2026-03-13T00:03:19.030933419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:19.031994 containerd[1547]: time="2026-03-13T00:03:19.031920094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.112924746s" Mar 13 00:03:19.031994 containerd[1547]: time="2026-03-13T00:03:19.031960175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 13 00:03:19.036089 containerd[1547]: time="2026-03-13T00:03:19.035603585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:03:19.039379 containerd[1547]: time="2026-03-13T00:03:19.039220674Z" level=info msg="CreateContainer within sandbox \"67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:03:19.071141 containerd[1547]: time="2026-03-13T00:03:19.069284423Z" level=info msg="Container e52fd0bbeb28132ff70d3a1ab4e146dd80c9bc0908492bbae6e8f1023c8ce0ca: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:19.070659 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount480455363.mount: Deactivated successfully. Mar 13 00:03:19.088958 containerd[1547]: time="2026-03-13T00:03:19.088763956Z" level=info msg="CreateContainer within sandbox \"67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e52fd0bbeb28132ff70d3a1ab4e146dd80c9bc0908492bbae6e8f1023c8ce0ca\"" Mar 13 00:03:19.091371 containerd[1547]: time="2026-03-13T00:03:19.091157562Z" level=info msg="StartContainer for \"e52fd0bbeb28132ff70d3a1ab4e146dd80c9bc0908492bbae6e8f1023c8ce0ca\"" Mar 13 00:03:19.095053 containerd[1547]: time="2026-03-13T00:03:19.094878014Z" level=info msg="connecting to shim e52fd0bbeb28132ff70d3a1ab4e146dd80c9bc0908492bbae6e8f1023c8ce0ca" address="unix:///run/containerd/s/e39266c2291bd39fbae5e1c68adb581a5737504a00727276e5bafa24becdcc0c" protocol=ttrpc version=3 Mar 13 00:03:19.121334 systemd[1]: Started cri-containerd-e52fd0bbeb28132ff70d3a1ab4e146dd80c9bc0908492bbae6e8f1023c8ce0ca.scope - libcontainer container e52fd0bbeb28132ff70d3a1ab4e146dd80c9bc0908492bbae6e8f1023c8ce0ca. 
Mar 13 00:03:19.202283 containerd[1547]: time="2026-03-13T00:03:19.202183072Z" level=info msg="StartContainer for \"e52fd0bbeb28132ff70d3a1ab4e146dd80c9bc0908492bbae6e8f1023c8ce0ca\" returns successfully" Mar 13 00:03:19.632475 systemd-networkd[1419]: vxlan.calico: Gained IPv6LL Mar 13 00:03:20.694088 containerd[1547]: time="2026-03-13T00:03:20.693741570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:20.695494 containerd[1547]: time="2026-03-13T00:03:20.695432989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 13 00:03:20.697161 containerd[1547]: time="2026-03-13T00:03:20.696737874Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:20.699748 containerd[1547]: time="2026-03-13T00:03:20.699709537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:20.701374 containerd[1547]: time="2026-03-13T00:03:20.701335073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.665689407s" Mar 13 00:03:20.701711 containerd[1547]: time="2026-03-13T00:03:20.701585122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 13 00:03:20.705110 containerd[1547]: 
time="2026-03-13T00:03:20.705055762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:03:20.709901 containerd[1547]: time="2026-03-13T00:03:20.709696683Z" level=info msg="CreateContainer within sandbox \"8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:03:20.721317 containerd[1547]: time="2026-03-13T00:03:20.721266484Z" level=info msg="Container ed4a5e3144e07c0ffdcf55bf5eb492b6ab762b0584b09f352c76832810484c69: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:20.727706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827871581.mount: Deactivated successfully. Mar 13 00:03:20.738105 containerd[1547]: time="2026-03-13T00:03:20.738006103Z" level=info msg="CreateContainer within sandbox \"8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ed4a5e3144e07c0ffdcf55bf5eb492b6ab762b0584b09f352c76832810484c69\"" Mar 13 00:03:20.740362 containerd[1547]: time="2026-03-13T00:03:20.740258621Z" level=info msg="StartContainer for \"ed4a5e3144e07c0ffdcf55bf5eb492b6ab762b0584b09f352c76832810484c69\"" Mar 13 00:03:20.743106 containerd[1547]: time="2026-03-13T00:03:20.742984636Z" level=info msg="connecting to shim ed4a5e3144e07c0ffdcf55bf5eb492b6ab762b0584b09f352c76832810484c69" address="unix:///run/containerd/s/702a2eb1d267c3e1b6f13d6225379d01e2a24e5b82cdde14269827ebd39d39e8" protocol=ttrpc version=3 Mar 13 00:03:20.767279 systemd[1]: Started cri-containerd-ed4a5e3144e07c0ffdcf55bf5eb492b6ab762b0584b09f352c76832810484c69.scope - libcontainer container ed4a5e3144e07c0ffdcf55bf5eb492b6ab762b0584b09f352c76832810484c69. 
Mar 13 00:03:20.827625 containerd[1547]: time="2026-03-13T00:03:20.827545445Z" level=info msg="StartContainer for \"ed4a5e3144e07c0ffdcf55bf5eb492b6ab762b0584b09f352c76832810484c69\" returns successfully" Mar 13 00:03:22.213451 kubelet[2780]: I0313 00:03:22.213383 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:03:22.542589 containerd[1547]: time="2026-03-13T00:03:22.541874323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:22.543547 containerd[1547]: time="2026-03-13T00:03:22.543520338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 13 00:03:22.545257 containerd[1547]: time="2026-03-13T00:03:22.545228394Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:22.549817 containerd[1547]: time="2026-03-13T00:03:22.549773903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:22.550612 containerd[1547]: time="2026-03-13T00:03:22.550403124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.845104193s" Mar 13 00:03:22.550612 containerd[1547]: time="2026-03-13T00:03:22.550439165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" 
returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 13 00:03:22.552578 containerd[1547]: time="2026-03-13T00:03:22.552289106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:03:22.557181 containerd[1547]: time="2026-03-13T00:03:22.557141785Z" level=info msg="CreateContainer within sandbox \"67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:03:22.568858 containerd[1547]: time="2026-03-13T00:03:22.568800448Z" level=info msg="Container 6bdaf32ddec99b0b65ad108e2a1f8ddeb4b663419bd6f80fbea00322ba089297: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:22.575143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392848903.mount: Deactivated successfully. Mar 13 00:03:22.582073 containerd[1547]: time="2026-03-13T00:03:22.582013162Z" level=info msg="CreateContainer within sandbox \"67d030cd6f13c397241beb55c9c496bef8ef39a54874ffcf6dc4b813f5746883\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6bdaf32ddec99b0b65ad108e2a1f8ddeb4b663419bd6f80fbea00322ba089297\"" Mar 13 00:03:22.586537 containerd[1547]: time="2026-03-13T00:03:22.586491670Z" level=info msg="StartContainer for \"6bdaf32ddec99b0b65ad108e2a1f8ddeb4b663419bd6f80fbea00322ba089297\"" Mar 13 00:03:22.588764 containerd[1547]: time="2026-03-13T00:03:22.588658541Z" level=info msg="connecting to shim 6bdaf32ddec99b0b65ad108e2a1f8ddeb4b663419bd6f80fbea00322ba089297" address="unix:///run/containerd/s/e39266c2291bd39fbae5e1c68adb581a5737504a00727276e5bafa24becdcc0c" protocol=ttrpc version=3 Mar 13 00:03:22.621599 systemd[1]: Started cri-containerd-6bdaf32ddec99b0b65ad108e2a1f8ddeb4b663419bd6f80fbea00322ba089297.scope - libcontainer container 6bdaf32ddec99b0b65ad108e2a1f8ddeb4b663419bd6f80fbea00322ba089297. 
Mar 13 00:03:22.704222 containerd[1547]: time="2026-03-13T00:03:22.704179617Z" level=info msg="StartContainer for \"6bdaf32ddec99b0b65ad108e2a1f8ddeb4b663419bd6f80fbea00322ba089297\" returns successfully" Mar 13 00:03:22.766251 kubelet[2780]: I0313 00:03:22.766122 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v6ht6" podStartSLOduration=19.132140146 podStartE2EDuration="24.766101531s" podCreationTimestamp="2026-03-13 00:02:58 +0000 UTC" firstStartedPulling="2026-03-13 00:03:16.917919947 +0000 UTC m=+43.593478401" lastFinishedPulling="2026-03-13 00:03:22.551881372 +0000 UTC m=+49.227439786" observedRunningTime="2026-03-13 00:03:22.762947988 +0000 UTC m=+49.438506442" watchObservedRunningTime="2026-03-13 00:03:22.766101531 +0000 UTC m=+49.441660065" Mar 13 00:03:23.581522 kubelet[2780]: I0313 00:03:23.581355 2780 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:03:23.581522 kubelet[2780]: I0313 00:03:23.581411 2780 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:03:24.826717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642418785.mount: Deactivated successfully. 
Mar 13 00:03:24.853089 containerd[1547]: time="2026-03-13T00:03:24.851916235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:24.853440 containerd[1547]: time="2026-03-13T00:03:24.853092431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 13 00:03:24.854696 containerd[1547]: time="2026-03-13T00:03:24.854646160Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:24.857480 containerd[1547]: time="2026-03-13T00:03:24.857435167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:24.859230 containerd[1547]: time="2026-03-13T00:03:24.859112979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.306787073s" Mar 13 00:03:24.859230 containerd[1547]: time="2026-03-13T00:03:24.859170101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 13 00:03:24.866445 containerd[1547]: time="2026-03-13T00:03:24.866397366Z" level=info msg="CreateContainer within sandbox \"8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:03:24.879780 
containerd[1547]: time="2026-03-13T00:03:24.879697621Z" level=info msg="Container ee8012117406766edccd79c36a39a2e07a9944aaa887e02346c59cf2ea164b13: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:24.907275 containerd[1547]: time="2026-03-13T00:03:24.907217959Z" level=info msg="CreateContainer within sandbox \"8cbb9e3515c5fde9e0e0eef7498f8cbb9186528c2b6b5176068fc0a94e4e29f0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ee8012117406766edccd79c36a39a2e07a9944aaa887e02346c59cf2ea164b13\"" Mar 13 00:03:24.910750 containerd[1547]: time="2026-03-13T00:03:24.909262423Z" level=info msg="StartContainer for \"ee8012117406766edccd79c36a39a2e07a9944aaa887e02346c59cf2ea164b13\"" Mar 13 00:03:24.911992 containerd[1547]: time="2026-03-13T00:03:24.911937987Z" level=info msg="connecting to shim ee8012117406766edccd79c36a39a2e07a9944aaa887e02346c59cf2ea164b13" address="unix:///run/containerd/s/702a2eb1d267c3e1b6f13d6225379d01e2a24e5b82cdde14269827ebd39d39e8" protocol=ttrpc version=3 Mar 13 00:03:24.946308 systemd[1]: Started cri-containerd-ee8012117406766edccd79c36a39a2e07a9944aaa887e02346c59cf2ea164b13.scope - libcontainer container ee8012117406766edccd79c36a39a2e07a9944aaa887e02346c59cf2ea164b13. 
Mar 13 00:03:25.007532 containerd[1547]: time="2026-03-13T00:03:25.007469041Z" level=info msg="StartContainer for \"ee8012117406766edccd79c36a39a2e07a9944aaa887e02346c59cf2ea164b13\" returns successfully" Mar 13 00:03:25.777807 kubelet[2780]: I0313 00:03:25.776618 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-98749464f-vc9cd" podStartSLOduration=2.490928163 podStartE2EDuration="9.776600532s" podCreationTimestamp="2026-03-13 00:03:16 +0000 UTC" firstStartedPulling="2026-03-13 00:03:17.574943937 +0000 UTC m=+44.250502351" lastFinishedPulling="2026-03-13 00:03:24.860616306 +0000 UTC m=+51.536174720" observedRunningTime="2026-03-13 00:03:25.77488412 +0000 UTC m=+52.450442534" watchObservedRunningTime="2026-03-13 00:03:25.776600532 +0000 UTC m=+52.452158946" Mar 13 00:03:26.491894 containerd[1547]: time="2026-03-13T00:03:26.491828847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qsps2,Uid:f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f,Namespace:kube-system,Attempt:0,}" Mar 13 00:03:26.704484 systemd-networkd[1419]: calib8f2e8e0c24: Link UP Mar 13 00:03:26.705799 systemd-networkd[1419]: calib8f2e8e0c24: Gained carrier Mar 13 00:03:26.735544 containerd[1547]: 2026-03-13 00:03:26.591 [INFO][4426] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0 coredns-674b8bbfcf- kube-system f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f 876 0 2026-03-13 00:02:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 coredns-674b8bbfcf-qsps2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib8f2e8e0c24 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-" Mar 13 00:03:26.735544 containerd[1547]: 2026-03-13 00:03:26.591 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" Mar 13 00:03:26.735544 containerd[1547]: 2026-03-13 00:03:26.630 [INFO][4437] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" HandleID="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Workload="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.643 [INFO][4437] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" HandleID="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Workload="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-499db54055", "pod":"coredns-674b8bbfcf-qsps2", "timestamp":"2026-03-13 00:03:26.630138703 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186840)} Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.644 [INFO][4437] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.644 [INFO][4437] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.644 [INFO][4437] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.648 [INFO][4437] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.659 [INFO][4437] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.668 [INFO][4437] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.672 [INFO][4437] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.735767 containerd[1547]: 2026-03-13 00:03:26.678 [INFO][4437] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.736008 containerd[1547]: 2026-03-13 00:03:26.678 [INFO][4437] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.736008 containerd[1547]: 2026-03-13 00:03:26.681 [INFO][4437] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf Mar 13 00:03:26.736008 containerd[1547]: 2026-03-13 00:03:26.688 [INFO][4437] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 
handle="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.736008 containerd[1547]: 2026-03-13 00:03:26.696 [INFO][4437] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.195/26] block=192.168.58.192/26 handle="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.736008 containerd[1547]: 2026-03-13 00:03:26.696 [INFO][4437] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.195/26] handle="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:26.736008 containerd[1547]: 2026-03-13 00:03:26.696 [INFO][4437] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:03:26.736008 containerd[1547]: 2026-03-13 00:03:26.696 [INFO][4437] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.195/26] IPv6=[] ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" HandleID="k8s-pod-network.785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Workload="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" Mar 13 00:03:26.737457 containerd[1547]: 2026-03-13 00:03:26.699 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 40, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"coredns-674b8bbfcf-qsps2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8f2e8e0c24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:26.737457 containerd[1547]: 2026-03-13 00:03:26.699 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.195/32] ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" Mar 13 00:03:26.737457 containerd[1547]: 2026-03-13 00:03:26.699 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8f2e8e0c24 ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" Mar 13 00:03:26.737457 containerd[1547]: 2026-03-13 00:03:26.705 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" Mar 13 00:03:26.737457 containerd[1547]: 2026-03-13 00:03:26.707 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf", Pod:"coredns-674b8bbfcf-qsps2", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8f2e8e0c24", MAC:"96:2a:97:76:2f:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:26.737457 containerd[1547]: 2026-03-13 00:03:26.731 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-qsps2" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--qsps2-eth0" Mar 13 00:03:26.766353 containerd[1547]: time="2026-03-13T00:03:26.766221573Z" level=info msg="connecting to shim 785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf" address="unix:///run/containerd/s/e89798d5ef075ef3c7dd63c9f09d79e7688550dbbaf99ad43345d750c7248582" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:26.824294 systemd[1]: Started cri-containerd-785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf.scope - libcontainer container 785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf. 
Mar 13 00:03:26.896492 containerd[1547]: time="2026-03-13T00:03:26.896424989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qsps2,Uid:f4bb0fab-39ff-4cb6-8657-a4e8f09c7b1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf\"" Mar 13 00:03:26.913095 containerd[1547]: time="2026-03-13T00:03:26.913025960Z" level=info msg="CreateContainer within sandbox \"785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:03:26.933952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720281044.mount: Deactivated successfully. Mar 13 00:03:26.937088 containerd[1547]: time="2026-03-13T00:03:26.936487615Z" level=info msg="Container 6e29ec53d6b3901d2ada63f1d7a4ddbd905e4c922f205e2f2d4c6be5d968ee15: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:26.943823 containerd[1547]: time="2026-03-13T00:03:26.943759031Z" level=info msg="CreateContainer within sandbox \"785ea6d5ce332a02ee51c3031f226beb2bfa473ec2d1bd58295b90c4c5a1b4cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e29ec53d6b3901d2ada63f1d7a4ddbd905e4c922f205e2f2d4c6be5d968ee15\"" Mar 13 00:03:26.946289 containerd[1547]: time="2026-03-13T00:03:26.946250984Z" level=info msg="StartContainer for \"6e29ec53d6b3901d2ada63f1d7a4ddbd905e4c922f205e2f2d4c6be5d968ee15\"" Mar 13 00:03:26.947707 containerd[1547]: time="2026-03-13T00:03:26.947667346Z" level=info msg="connecting to shim 6e29ec53d6b3901d2ada63f1d7a4ddbd905e4c922f205e2f2d4c6be5d968ee15" address="unix:///run/containerd/s/e89798d5ef075ef3c7dd63c9f09d79e7688550dbbaf99ad43345d750c7248582" protocol=ttrpc version=3 Mar 13 00:03:26.970562 systemd[1]: Started cri-containerd-6e29ec53d6b3901d2ada63f1d7a4ddbd905e4c922f205e2f2d4c6be5d968ee15.scope - libcontainer container 6e29ec53d6b3901d2ada63f1d7a4ddbd905e4c922f205e2f2d4c6be5d968ee15. 
Mar 13 00:03:27.041748 containerd[1547]: time="2026-03-13T00:03:27.041485934Z" level=info msg="StartContainer for \"6e29ec53d6b3901d2ada63f1d7a4ddbd905e4c922f205e2f2d4c6be5d968ee15\" returns successfully" Mar 13 00:03:27.492094 containerd[1547]: time="2026-03-13T00:03:27.491154114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-kt45l,Uid:0d1cef36-5dde-4c02-a3f5-f4b49f0d9528,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:27.505953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81508303.mount: Deactivated successfully. Mar 13 00:03:27.643579 systemd-networkd[1419]: calie05e86c26bf: Link UP Mar 13 00:03:27.644182 systemd-networkd[1419]: calie05e86c26bf: Gained carrier Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.548 [INFO][4545] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0 calico-apiserver-6f9d647bcf- calico-system 0d1cef36-5dde-4c02-a3f5-f4b49f0d9528 880 0 2026-03-13 00:02:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f9d647bcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 calico-apiserver-6f9d647bcf-kt45l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie05e86c26bf [] [] }} ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.548 [INFO][4545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" 
Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.582 [INFO][4557] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" HandleID="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Workload="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.596 [INFO][4557] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" HandleID="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Workload="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-499db54055", "pod":"calico-apiserver-6f9d647bcf-kt45l", "timestamp":"2026-03-13 00:03:27.582960044 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002a5080)} Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.596 [INFO][4557] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.596 [INFO][4557] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.596 [INFO][4557] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.600 [INFO][4557] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.607 [INFO][4557] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.613 [INFO][4557] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.615 [INFO][4557] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.618 [INFO][4557] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.618 [INFO][4557] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.620 [INFO][4557] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14 Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.626 [INFO][4557] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.633 [INFO][4557] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.196/26] block=192.168.58.192/26 handle="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.633 [INFO][4557] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.196/26] handle="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.634 [INFO][4557] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:03:27.662965 containerd[1547]: 2026-03-13 00:03:27.634 [INFO][4557] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.196/26] IPv6=[] ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" HandleID="k8s-pod-network.3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Workload="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" Mar 13 00:03:27.663833 containerd[1547]: 2026-03-13 00:03:27.638 [INFO][4545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0", GenerateName:"calico-apiserver-6f9d647bcf-", Namespace:"calico-system", SelfLink:"", UID:"0d1cef36-5dde-4c02-a3f5-f4b49f0d9528", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6f9d647bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"calico-apiserver-6f9d647bcf-kt45l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie05e86c26bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:27.663833 containerd[1547]: 2026-03-13 00:03:27.638 [INFO][4545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.196/32] ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" Mar 13 00:03:27.663833 containerd[1547]: 2026-03-13 00:03:27.639 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie05e86c26bf ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" Mar 13 00:03:27.663833 containerd[1547]: 2026-03-13 00:03:27.644 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" 
WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" Mar 13 00:03:27.663833 containerd[1547]: 2026-03-13 00:03:27.645 [INFO][4545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0", GenerateName:"calico-apiserver-6f9d647bcf-", Namespace:"calico-system", SelfLink:"", UID:"0d1cef36-5dde-4c02-a3f5-f4b49f0d9528", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9d647bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14", Pod:"calico-apiserver-6f9d647bcf-kt45l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie05e86c26bf", MAC:"2e:75:ed:51:a6:30", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:27.663833 containerd[1547]: 2026-03-13 00:03:27.659 [INFO][4545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-kt45l" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--kt45l-eth0" Mar 13 00:03:27.693738 containerd[1547]: time="2026-03-13T00:03:27.693674760Z" level=info msg="connecting to shim 3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14" address="unix:///run/containerd/s/df44dcde68c247adbb8b7d884731686e9ca239c74cdb8af3843dc4df100b9e33" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:27.731343 systemd[1]: Started cri-containerd-3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14.scope - libcontainer container 3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14. 
Mar 13 00:03:27.798682 kubelet[2780]: I0313 00:03:27.798249 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qsps2" podStartSLOduration=47.798230818 podStartE2EDuration="47.798230818s" podCreationTimestamp="2026-03-13 00:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:03:27.79761656 +0000 UTC m=+54.473174974" watchObservedRunningTime="2026-03-13 00:03:27.798230818 +0000 UTC m=+54.473789232" Mar 13 00:03:27.801560 containerd[1547]: time="2026-03-13T00:03:27.800885135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-kt45l,Uid:0d1cef36-5dde-4c02-a3f5-f4b49f0d9528,Namespace:calico-system,Attempt:0,} returns sandbox id \"3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14\"" Mar 13 00:03:27.805954 containerd[1547]: time="2026-03-13T00:03:27.805917760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:03:28.080283 systemd-networkd[1419]: calib8f2e8e0c24: Gained IPv6LL Mar 13 00:03:28.491267 containerd[1547]: time="2026-03-13T00:03:28.491004460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-psgzh,Uid:60e1da4c-b449-490a-8fd7-8c8e37b407af,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:28.492105 containerd[1547]: time="2026-03-13T00:03:28.491973007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ps5pt,Uid:2342ff31-0324-4c15-ba8c-63477bccb715,Namespace:kube-system,Attempt:0,}" Mar 13 00:03:28.702135 systemd-networkd[1419]: calib01bd8fd273: Link UP Mar 13 00:03:28.703951 systemd-networkd[1419]: calib01bd8fd273: Gained carrier Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.586 [INFO][4643] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0 coredns-674b8bbfcf- kube-system 2342ff31-0324-4c15-ba8c-63477bccb715 869 0 2026-03-13 00:02:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 coredns-674b8bbfcf-ps5pt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib01bd8fd273 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.586 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.623 [INFO][4672] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" HandleID="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Workload="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.637 [INFO][4672] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" HandleID="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Workload="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002734e0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-499db54055", "pod":"coredns-674b8bbfcf-ps5pt", "timestamp":"2026-03-13 00:03:28.623474268 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001f4f20)} Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.638 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.638 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.638 [INFO][4672] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.641 [INFO][4672] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.648 [INFO][4672] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.655 [INFO][4672] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.666 [INFO][4672] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.670 [INFO][4672] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.670 [INFO][4672] ipam/ipam.go 1245: Attempting to assign 1 addresses from 
block block=192.168.58.192/26 handle="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.673 [INFO][4672] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5 Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.680 [INFO][4672] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.691 [INFO][4672] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.197/26] block=192.168.58.192/26 handle="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.691 [INFO][4672] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.197/26] handle="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.691 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:03:28.723306 containerd[1547]: 2026-03-13 00:03:28.691 [INFO][4672] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.197/26] IPv6=[] ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" HandleID="k8s-pod-network.cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Workload="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" Mar 13 00:03:28.724439 containerd[1547]: 2026-03-13 00:03:28.695 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2342ff31-0324-4c15-ba8c-63477bccb715", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"coredns-674b8bbfcf-ps5pt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calib01bd8fd273", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:28.724439 containerd[1547]: 2026-03-13 00:03:28.696 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.197/32] ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" Mar 13 00:03:28.724439 containerd[1547]: 2026-03-13 00:03:28.696 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib01bd8fd273 ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" Mar 13 00:03:28.724439 containerd[1547]: 2026-03-13 00:03:28.703 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" Mar 13 00:03:28.724439 containerd[1547]: 2026-03-13 00:03:28.704 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" 
WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2342ff31-0324-4c15-ba8c-63477bccb715", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5", Pod:"coredns-674b8bbfcf-ps5pt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib01bd8fd273", MAC:"c6:dd:f6:2d:85:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:28.724439 
containerd[1547]: 2026-03-13 00:03:28.719 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" Namespace="kube-system" Pod="coredns-674b8bbfcf-ps5pt" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-coredns--674b8bbfcf--ps5pt-eth0" Mar 13 00:03:28.775341 containerd[1547]: time="2026-03-13T00:03:28.775208698Z" level=info msg="connecting to shim cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5" address="unix:///run/containerd/s/1fd508769ff10412b01d3eb1d86de106b87897b57a807106ab142e34fb0b23c6" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:28.820431 systemd[1]: Started cri-containerd-cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5.scope - libcontainer container cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5. Mar 13 00:03:28.836295 systemd-networkd[1419]: cali71c25911506: Link UP Mar 13 00:03:28.837038 systemd-networkd[1419]: cali71c25911506: Gained carrier Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.581 [INFO][4637] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0 goldmane-5b85766d88- calico-system 60e1da4c-b449-490a-8fd7-8c8e37b407af 878 0 2026-03-13 00:02:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 goldmane-5b85766d88-psgzh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali71c25911506 [] [] }} ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-" Mar 13 00:03:28.867452 
containerd[1547]: 2026-03-13 00:03:28.581 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.630 [INFO][4674] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" HandleID="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Workload="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.647 [INFO][4674] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" HandleID="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Workload="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f34b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-499db54055", "pod":"goldmane-5b85766d88-psgzh", "timestamp":"2026-03-13 00:03:28.630580388 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400025cdc0)} Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.647 [INFO][4674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.692 [INFO][4674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.692 [INFO][4674] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.743 [INFO][4674] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.766 [INFO][4674] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.790 [INFO][4674] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.796 [INFO][4674] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.800 [INFO][4674] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.800 [INFO][4674] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.803 [INFO][4674] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.811 [INFO][4674] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.825 [INFO][4674] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.198/26] block=192.168.58.192/26 handle="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.825 [INFO][4674] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.198/26] handle="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.825 [INFO][4674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:03:28.867452 containerd[1547]: 2026-03-13 00:03:28.825 [INFO][4674] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.198/26] IPv6=[] ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" HandleID="k8s-pod-network.be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Workload="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" Mar 13 00:03:28.868986 containerd[1547]: 2026-03-13 00:03:28.829 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"60e1da4c-b449-490a-8fd7-8c8e37b407af", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"goldmane-5b85766d88-psgzh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali71c25911506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:28.868986 containerd[1547]: 2026-03-13 00:03:28.829 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.198/32] ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" Mar 13 00:03:28.868986 containerd[1547]: 2026-03-13 00:03:28.829 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71c25911506 ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" Mar 13 00:03:28.868986 containerd[1547]: 2026-03-13 00:03:28.836 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" Mar 13 00:03:28.868986 containerd[1547]: 2026-03-13 00:03:28.838 [INFO][4637] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"60e1da4c-b449-490a-8fd7-8c8e37b407af", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d", Pod:"goldmane-5b85766d88-psgzh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali71c25911506", MAC:"02:78:7c:a1:97:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:28.868986 containerd[1547]: 2026-03-13 00:03:28.861 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" Namespace="calico-system" Pod="goldmane-5b85766d88-psgzh" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-goldmane--5b85766d88--psgzh-eth0" Mar 13 00:03:28.909404 containerd[1547]: time="2026-03-13T00:03:28.909345032Z" level=info msg="connecting to shim be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d" address="unix:///run/containerd/s/6c72dcd573278066a20f40ce630373b98a14023cc30e67231057a650e8e18fad" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:28.915353 containerd[1547]: time="2026-03-13T00:03:28.915279799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ps5pt,Uid:2342ff31-0324-4c15-ba8c-63477bccb715,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5\"" Mar 13 00:03:28.925647 containerd[1547]: time="2026-03-13T00:03:28.925471206Z" level=info msg="CreateContainer within sandbox \"cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:03:28.956238 containerd[1547]: time="2026-03-13T00:03:28.955923023Z" level=info msg="Container 3ac0cf34af071d9358b4fe3bc8d94b414a195d37102f1316c7ed3c3740efbe6e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:28.969653 containerd[1547]: time="2026-03-13T00:03:28.969597648Z" level=info msg="CreateContainer within sandbox \"cbcc83212f9b844fffcd16bb4b2f5c2dd4e352be71735e78cb414e7155e2d7b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ac0cf34af071d9358b4fe3bc8d94b414a195d37102f1316c7ed3c3740efbe6e\"" Mar 13 00:03:28.972316 containerd[1547]: time="2026-03-13T00:03:28.972263603Z" level=info msg="StartContainer for \"3ac0cf34af071d9358b4fe3bc8d94b414a195d37102f1316c7ed3c3740efbe6e\"" Mar 13 00:03:28.976234 containerd[1547]: time="2026-03-13T00:03:28.976116391Z" level=info msg="connecting to shim 
3ac0cf34af071d9358b4fe3bc8d94b414a195d37102f1316c7ed3c3740efbe6e" address="unix:///run/containerd/s/1fd508769ff10412b01d3eb1d86de106b87897b57a807106ab142e34fb0b23c6" protocol=ttrpc version=3 Mar 13 00:03:29.010871 systemd[1]: Started cri-containerd-be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d.scope - libcontainer container be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d. Mar 13 00:03:29.033450 systemd[1]: Started cri-containerd-3ac0cf34af071d9358b4fe3bc8d94b414a195d37102f1316c7ed3c3740efbe6e.scope - libcontainer container 3ac0cf34af071d9358b4fe3bc8d94b414a195d37102f1316c7ed3c3740efbe6e. Mar 13 00:03:29.085203 containerd[1547]: time="2026-03-13T00:03:29.085094999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-psgzh,Uid:60e1da4c-b449-490a-8fd7-8c8e37b407af,Namespace:calico-system,Attempt:0,} returns sandbox id \"be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d\"" Mar 13 00:03:29.087045 containerd[1547]: time="2026-03-13T00:03:29.087001851Z" level=info msg="StartContainer for \"3ac0cf34af071d9358b4fe3bc8d94b414a195d37102f1316c7ed3c3740efbe6e\" returns successfully" Mar 13 00:03:29.489126 systemd-networkd[1419]: calie05e86c26bf: Gained IPv6LL Mar 13 00:03:29.494288 containerd[1547]: time="2026-03-13T00:03:29.492531538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995888b6d-w882j,Uid:a121ceba-000c-4da5-9bb5-f183423a4619,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:29.510838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146406552.mount: Deactivated successfully. 
Mar 13 00:03:29.701978 systemd-networkd[1419]: cali107734e5eba: Link UP Mar 13 00:03:29.703389 systemd-networkd[1419]: cali107734e5eba: Gained carrier Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.596 [INFO][4849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0 calico-kube-controllers-7995888b6d- calico-system a121ceba-000c-4da5-9bb5-f183423a4619 875 0 2026-03-13 00:02:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7995888b6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 calico-kube-controllers-7995888b6d-w882j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali107734e5eba [] [] }} ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.596 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.623 [INFO][4864] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" HandleID="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" 
Workload="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.638 [INFO][4864] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" HandleID="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Workload="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-499db54055", "pod":"calico-kube-controllers-7995888b6d-w882j", "timestamp":"2026-03-13 00:03:29.623368648 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030d080)} Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.639 [INFO][4864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.639 [INFO][4864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.639 [INFO][4864] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.642 [INFO][4864] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.652 [INFO][4864] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.668 [INFO][4864] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.670 [INFO][4864] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.674 [INFO][4864] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.674 [INFO][4864] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.676 [INFO][4864] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.682 [INFO][4864] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.693 [INFO][4864] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.58.199/26] block=192.168.58.192/26 handle="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.693 [INFO][4864] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.199/26] handle="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.693 [INFO][4864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:03:29.727139 containerd[1547]: 2026-03-13 00:03:29.693 [INFO][4864] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.199/26] IPv6=[] ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" HandleID="k8s-pod-network.b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Workload="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" Mar 13 00:03:29.732796 containerd[1547]: 2026-03-13 00:03:29.697 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0", GenerateName:"calico-kube-controllers-7995888b6d-", Namespace:"calico-system", SelfLink:"", UID:"a121ceba-000c-4da5-9bb5-f183423a4619", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7995888b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"calico-kube-controllers-7995888b6d-w882j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali107734e5eba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:29.732796 containerd[1547]: 2026-03-13 00:03:29.697 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.199/32] ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" Mar 13 00:03:29.732796 containerd[1547]: 2026-03-13 00:03:29.697 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali107734e5eba ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" Mar 13 00:03:29.732796 containerd[1547]: 2026-03-13 00:03:29.703 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" Mar 13 00:03:29.732796 containerd[1547]: 2026-03-13 00:03:29.704 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0", GenerateName:"calico-kube-controllers-7995888b6d-", Namespace:"calico-system", SelfLink:"", UID:"a121ceba-000c-4da5-9bb5-f183423a4619", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7995888b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb", Pod:"calico-kube-controllers-7995888b6d-w882j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali107734e5eba", MAC:"e6:fd:52:2f:24:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:29.732796 containerd[1547]: 2026-03-13 00:03:29.720 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" Namespace="calico-system" Pod="calico-kube-controllers-7995888b6d-w882j" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--kube--controllers--7995888b6d--w882j-eth0" Mar 13 00:03:29.768093 containerd[1547]: time="2026-03-13T00:03:29.767620166Z" level=info msg="connecting to shim b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb" address="unix:///run/containerd/s/c1215c885d875c5e43d07c0acff41a8ba939048090f8a0ce5b592743afd74e61" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:29.811292 systemd[1]: Started cri-containerd-b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb.scope - libcontainer container b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb. 
Mar 13 00:03:29.831683 kubelet[2780]: I0313 00:03:29.831001 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ps5pt" podStartSLOduration=49.830983304 podStartE2EDuration="49.830983304s" podCreationTimestamp="2026-03-13 00:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:03:29.830629734 +0000 UTC m=+56.506188148" watchObservedRunningTime="2026-03-13 00:03:29.830983304 +0000 UTC m=+56.506541678" Mar 13 00:03:29.953844 containerd[1547]: time="2026-03-13T00:03:29.953768193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995888b6d-w882j,Uid:a121ceba-000c-4da5-9bb5-f183423a4619,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb\"" Mar 13 00:03:30.257804 systemd-networkd[1419]: calib01bd8fd273: Gained IPv6LL Mar 13 00:03:30.492306 containerd[1547]: time="2026-03-13T00:03:30.491974946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-qvp9s,Uid:4c6f0879-0241-4332-9161-877f63c69a41,Namespace:calico-system,Attempt:0,}" Mar 13 00:03:30.513257 systemd-networkd[1419]: cali71c25911506: Gained IPv6LL Mar 13 00:03:30.715467 systemd-networkd[1419]: calibaadd338167: Link UP Mar 13 00:03:30.715970 systemd-networkd[1419]: calibaadd338167: Gained carrier Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.586 [INFO][4953] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0 calico-apiserver-6f9d647bcf- calico-system 4c6f0879-0241-4332-9161-877f63c69a41 879 0 2026-03-13 00:02:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f9d647bcf projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-499db54055 calico-apiserver-6f9d647bcf-qvp9s eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibaadd338167 [] [] }} ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.587 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.626 [INFO][4967] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" HandleID="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Workload="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.641 [INFO][4967] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" HandleID="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Workload="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-499db54055", "pod":"calico-apiserver-6f9d647bcf-qvp9s", "timestamp":"2026-03-13 00:03:30.626627989 +0000 UTC"}, Hostname:"ci-4459-2-4-n-499db54055", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400036cdc0)} Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.641 [INFO][4967] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.641 [INFO][4967] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.641 [INFO][4967] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-499db54055' Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.648 [INFO][4967] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.660 [INFO][4967] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.670 [INFO][4967] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.673 [INFO][4967] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.678 [INFO][4967] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.678 [INFO][4967] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.681 [INFO][4967] ipam/ipam.go 
1806: Creating new handle: k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6 Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.687 [INFO][4967] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.698 [INFO][4967] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.200/26] block=192.168.58.192/26 handle="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.699 [INFO][4967] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.200/26] handle="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" host="ci-4459-2-4-n-499db54055" Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.699 [INFO][4967] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:03:30.742949 containerd[1547]: 2026-03-13 00:03:30.699 [INFO][4967] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.200/26] IPv6=[] ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" HandleID="k8s-pod-network.4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Workload="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" Mar 13 00:03:30.744441 containerd[1547]: 2026-03-13 00:03:30.703 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0", GenerateName:"calico-apiserver-6f9d647bcf-", Namespace:"calico-system", SelfLink:"", UID:"4c6f0879-0241-4332-9161-877f63c69a41", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9d647bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"", Pod:"calico-apiserver-6f9d647bcf-qvp9s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.58.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibaadd338167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:30.744441 containerd[1547]: 2026-03-13 00:03:30.703 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.200/32] ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" Mar 13 00:03:30.744441 containerd[1547]: 2026-03-13 00:03:30.703 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibaadd338167 ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" Mar 13 00:03:30.744441 containerd[1547]: 2026-03-13 00:03:30.715 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" Mar 13 00:03:30.744441 containerd[1547]: 2026-03-13 00:03:30.717 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0", GenerateName:"calico-apiserver-6f9d647bcf-", Namespace:"calico-system", SelfLink:"", UID:"4c6f0879-0241-4332-9161-877f63c69a41", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9d647bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-499db54055", ContainerID:"4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6", Pod:"calico-apiserver-6f9d647bcf-qvp9s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibaadd338167", MAC:"5e:35:95:1a:f6:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:03:30.744441 containerd[1547]: 2026-03-13 00:03:30.737 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" Namespace="calico-system" Pod="calico-apiserver-6f9d647bcf-qvp9s" WorkloadEndpoint="ci--4459--2--4--n--499db54055-k8s-calico--apiserver--6f9d647bcf--qvp9s-eth0" Mar 13 00:03:30.790937 containerd[1547]: time="2026-03-13T00:03:30.790729459Z" level=info 
msg="connecting to shim 4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6" address="unix:///run/containerd/s/8a12855582908c7ed26ea6767efa110ec85c4cee2db9d5dc4fb030a5145c7b87" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:03:30.833489 systemd[1]: Started cri-containerd-4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6.scope - libcontainer container 4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6. Mar 13 00:03:30.896401 containerd[1547]: time="2026-03-13T00:03:30.896309564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:30.898896 containerd[1547]: time="2026-03-13T00:03:30.898546184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 13 00:03:30.899150 containerd[1547]: time="2026-03-13T00:03:30.899116520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9d647bcf-qvp9s,Uid:4c6f0879-0241-4332-9161-877f63c69a41,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6\"" Mar 13 00:03:30.901018 containerd[1547]: time="2026-03-13T00:03:30.900957609Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:30.908711 containerd[1547]: time="2026-03-13T00:03:30.908149641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:30.909615 containerd[1547]: time="2026-03-13T00:03:30.909406875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo 
tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.102915499s" Mar 13 00:03:30.909615 containerd[1547]: time="2026-03-13T00:03:30.909553479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 13 00:03:30.910374 containerd[1547]: time="2026-03-13T00:03:30.910346020Z" level=info msg="CreateContainer within sandbox \"4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:03:30.914741 containerd[1547]: time="2026-03-13T00:03:30.913409342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:03:30.917505 containerd[1547]: time="2026-03-13T00:03:30.917436890Z" level=info msg="CreateContainer within sandbox \"3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:03:30.930293 containerd[1547]: time="2026-03-13T00:03:30.930246432Z" level=info msg="Container 1092bd392d8fbb02c85d50d49c157ab798f76aadfe4cb1d9f85052dc2199e129: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:30.934692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467384654.mount: Deactivated successfully. 
Mar 13 00:03:30.941326 containerd[1547]: time="2026-03-13T00:03:30.941273607Z" level=info msg="Container eb210b38c62a4d14e4077255a21b2dd85148162054b88ddb8e59255b746dff91: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:30.957243 containerd[1547]: time="2026-03-13T00:03:30.957001668Z" level=info msg="CreateContainer within sandbox \"4e6f35c4106bb62b0b96d8414b19771a7459747f01b63c578bd0339cbcdd46f6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1092bd392d8fbb02c85d50d49c157ab798f76aadfe4cb1d9f85052dc2199e129\"" Mar 13 00:03:30.958856 containerd[1547]: time="2026-03-13T00:03:30.958822957Z" level=info msg="StartContainer for \"1092bd392d8fbb02c85d50d49c157ab798f76aadfe4cb1d9f85052dc2199e129\"" Mar 13 00:03:30.965646 containerd[1547]: time="2026-03-13T00:03:30.965542177Z" level=info msg="connecting to shim 1092bd392d8fbb02c85d50d49c157ab798f76aadfe4cb1d9f85052dc2199e129" address="unix:///run/containerd/s/8a12855582908c7ed26ea6767efa110ec85c4cee2db9d5dc4fb030a5145c7b87" protocol=ttrpc version=3 Mar 13 00:03:30.969296 containerd[1547]: time="2026-03-13T00:03:30.969101752Z" level=info msg="CreateContainer within sandbox \"3da15eb1010aa4433af0759e4c8b0ee8336a26d35a2717893329dac231fa7d14\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb210b38c62a4d14e4077255a21b2dd85148162054b88ddb8e59255b746dff91\"" Mar 13 00:03:30.974407 containerd[1547]: time="2026-03-13T00:03:30.974361893Z" level=info msg="StartContainer for \"eb210b38c62a4d14e4077255a21b2dd85148162054b88ddb8e59255b746dff91\"" Mar 13 00:03:30.976802 containerd[1547]: time="2026-03-13T00:03:30.976692995Z" level=info msg="connecting to shim eb210b38c62a4d14e4077255a21b2dd85148162054b88ddb8e59255b746dff91" address="unix:///run/containerd/s/df44dcde68c247adbb8b7d884731686e9ca239c74cdb8af3843dc4df100b9e33" protocol=ttrpc version=3 Mar 13 00:03:30.998370 systemd[1]: Started 
cri-containerd-1092bd392d8fbb02c85d50d49c157ab798f76aadfe4cb1d9f85052dc2199e129.scope - libcontainer container 1092bd392d8fbb02c85d50d49c157ab798f76aadfe4cb1d9f85052dc2199e129. Mar 13 00:03:31.003356 systemd[1]: Started cri-containerd-eb210b38c62a4d14e4077255a21b2dd85148162054b88ddb8e59255b746dff91.scope - libcontainer container eb210b38c62a4d14e4077255a21b2dd85148162054b88ddb8e59255b746dff91. Mar 13 00:03:31.077809 containerd[1547]: time="2026-03-13T00:03:31.077355518Z" level=info msg="StartContainer for \"1092bd392d8fbb02c85d50d49c157ab798f76aadfe4cb1d9f85052dc2199e129\" returns successfully" Mar 13 00:03:31.093505 containerd[1547]: time="2026-03-13T00:03:31.093452818Z" level=info msg="StartContainer for \"eb210b38c62a4d14e4077255a21b2dd85148162054b88ddb8e59255b746dff91\" returns successfully" Mar 13 00:03:31.152319 systemd-networkd[1419]: cali107734e5eba: Gained IPv6LL Mar 13 00:03:31.774657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779373386.mount: Deactivated successfully. 
Mar 13 00:03:31.841787 kubelet[2780]: I0313 00:03:31.841697 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f9d647bcf-qvp9s" podStartSLOduration=35.841680625 podStartE2EDuration="35.841680625s" podCreationTimestamp="2026-03-13 00:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:03:31.841354896 +0000 UTC m=+58.516913310" watchObservedRunningTime="2026-03-13 00:03:31.841680625 +0000 UTC m=+58.517239039" Mar 13 00:03:32.560407 systemd-networkd[1419]: calibaadd338167: Gained IPv6LL Mar 13 00:03:32.838564 kubelet[2780]: I0313 00:03:32.838483 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:03:32.839178 kubelet[2780]: I0313 00:03:32.838483 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:03:33.361653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603335733.mount: Deactivated successfully. 
Mar 13 00:03:33.958044 containerd[1547]: time="2026-03-13T00:03:33.957216513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:33.960205 containerd[1547]: time="2026-03-13T00:03:33.960167786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 13 00:03:33.961521 containerd[1547]: time="2026-03-13T00:03:33.961489899Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:33.965748 containerd[1547]: time="2026-03-13T00:03:33.965701164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:33.966522 containerd[1547]: time="2026-03-13T00:03:33.966353540Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.05151044s" Mar 13 00:03:33.966830 containerd[1547]: time="2026-03-13T00:03:33.966651147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 13 00:03:33.968824 containerd[1547]: time="2026-03-13T00:03:33.968712478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:03:33.972897 containerd[1547]: time="2026-03-13T00:03:33.972799260Z" level=info msg="CreateContainer within sandbox 
\"be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:03:33.982805 containerd[1547]: time="2026-03-13T00:03:33.980185563Z" level=info msg="Container 5d257c6f4c80b7b4d08f234e4329c43e08b050c68f88a528ae0117374ef3ef66: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:34.010135 containerd[1547]: time="2026-03-13T00:03:34.010087620Z" level=info msg="CreateContainer within sandbox \"be2a18d3c6b9a661e5ee062a17921d6d1f6d06832b011b76d2ec2e67d7d9498d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5d257c6f4c80b7b4d08f234e4329c43e08b050c68f88a528ae0117374ef3ef66\"" Mar 13 00:03:34.011915 containerd[1547]: time="2026-03-13T00:03:34.011824142Z" level=info msg="StartContainer for \"5d257c6f4c80b7b4d08f234e4329c43e08b050c68f88a528ae0117374ef3ef66\"" Mar 13 00:03:34.015490 containerd[1547]: time="2026-03-13T00:03:34.015431270Z" level=info msg="connecting to shim 5d257c6f4c80b7b4d08f234e4329c43e08b050c68f88a528ae0117374ef3ef66" address="unix:///run/containerd/s/6c72dcd573278066a20f40ce630373b98a14023cc30e67231057a650e8e18fad" protocol=ttrpc version=3 Mar 13 00:03:34.043380 systemd[1]: Started cri-containerd-5d257c6f4c80b7b4d08f234e4329c43e08b050c68f88a528ae0117374ef3ef66.scope - libcontainer container 5d257c6f4c80b7b4d08f234e4329c43e08b050c68f88a528ae0117374ef3ef66. 
Mar 13 00:03:34.093685 containerd[1547]: time="2026-03-13T00:03:34.093572004Z" level=info msg="StartContainer for \"5d257c6f4c80b7b4d08f234e4329c43e08b050c68f88a528ae0117374ef3ef66\" returns successfully" Mar 13 00:03:34.869803 kubelet[2780]: I0313 00:03:34.869629 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f9d647bcf-kt45l" podStartSLOduration=35.763585312000004 podStartE2EDuration="38.869608535s" podCreationTimestamp="2026-03-13 00:02:56 +0000 UTC" firstStartedPulling="2026-03-13 00:03:27.80556239 +0000 UTC m=+54.481120804" lastFinishedPulling="2026-03-13 00:03:30.911585613 +0000 UTC m=+57.587144027" observedRunningTime="2026-03-13 00:03:31.863054063 +0000 UTC m=+58.538612477" watchObservedRunningTime="2026-03-13 00:03:34.869608535 +0000 UTC m=+61.545166949" Mar 13 00:03:34.872111 kubelet[2780]: I0313 00:03:34.871946 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-psgzh" podStartSLOduration=32.991898169 podStartE2EDuration="37.87190471s" podCreationTimestamp="2026-03-13 00:02:57 +0000 UTC" firstStartedPulling="2026-03-13 00:03:29.088498892 +0000 UTC m=+55.764057306" lastFinishedPulling="2026-03-13 00:03:33.968505473 +0000 UTC m=+60.644063847" observedRunningTime="2026-03-13 00:03:34.871640944 +0000 UTC m=+61.547199358" watchObservedRunningTime="2026-03-13 00:03:34.87190471 +0000 UTC m=+61.547463124" Mar 13 00:03:36.891121 containerd[1547]: time="2026-03-13T00:03:36.891031814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:36.893644 containerd[1547]: time="2026-03-13T00:03:36.893325947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 13 00:03:36.894924 containerd[1547]: time="2026-03-13T00:03:36.894873622Z" level=info msg="ImageCreate event 
name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:36.898135 containerd[1547]: time="2026-03-13T00:03:36.898093657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:03:36.900090 containerd[1547]: time="2026-03-13T00:03:36.899887858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.930879533s" Mar 13 00:03:36.900090 containerd[1547]: time="2026-03-13T00:03:36.900035382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 13 00:03:36.920870 containerd[1547]: time="2026-03-13T00:03:36.920426373Z" level=info msg="CreateContainer within sandbox \"b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:03:36.944586 containerd[1547]: time="2026-03-13T00:03:36.944520969Z" level=info msg="Container 4cd09ec32893a36caf215cf1c5d95ff92dfbd437e9751f95af2e8e1850960ad9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:03:36.964994 containerd[1547]: time="2026-03-13T00:03:36.964947641Z" level=info msg="CreateContainer within sandbox \"b2058a8592184318e3e0497006f1bd0fc78e5d1cab269f74a35b1185827a96bb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"4cd09ec32893a36caf215cf1c5d95ff92dfbd437e9751f95af2e8e1850960ad9\"" Mar 13 00:03:36.966401 containerd[1547]: time="2026-03-13T00:03:36.966344833Z" level=info msg="StartContainer for \"4cd09ec32893a36caf215cf1c5d95ff92dfbd437e9751f95af2e8e1850960ad9\"" Mar 13 00:03:36.971553 containerd[1547]: time="2026-03-13T00:03:36.971211066Z" level=info msg="connecting to shim 4cd09ec32893a36caf215cf1c5d95ff92dfbd437e9751f95af2e8e1850960ad9" address="unix:///run/containerd/s/c1215c885d875c5e43d07c0acff41a8ba939048090f8a0ce5b592743afd74e61" protocol=ttrpc version=3 Mar 13 00:03:37.003351 systemd[1]: Started cri-containerd-4cd09ec32893a36caf215cf1c5d95ff92dfbd437e9751f95af2e8e1850960ad9.scope - libcontainer container 4cd09ec32893a36caf215cf1c5d95ff92dfbd437e9751f95af2e8e1850960ad9. Mar 13 00:03:37.053105 containerd[1547]: time="2026-03-13T00:03:37.052863243Z" level=info msg="StartContainer for \"4cd09ec32893a36caf215cf1c5d95ff92dfbd437e9751f95af2e8e1850960ad9\" returns successfully" Mar 13 00:03:37.876854 kubelet[2780]: I0313 00:03:37.875722 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7995888b6d-w882j" podStartSLOduration=31.932290734 podStartE2EDuration="38.875703519s" podCreationTimestamp="2026-03-13 00:02:59 +0000 UTC" firstStartedPulling="2026-03-13 00:03:29.957938867 +0000 UTC m=+56.633497241" lastFinishedPulling="2026-03-13 00:03:36.901351612 +0000 UTC m=+63.576910026" observedRunningTime="2026-03-13 00:03:37.872837654 +0000 UTC m=+64.548396068" watchObservedRunningTime="2026-03-13 00:03:37.875703519 +0000 UTC m=+64.551261893" Mar 13 00:04:01.515203 kubelet[2780]: I0313 00:04:01.514139 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:04:08.518119 kubelet[2780]: I0313 00:04:08.517667 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:04:30.472849 systemd[1]: Started sshd@8-168.119.109.176:22-20.161.92.111:55976.service - 
OpenSSH per-connection server daemon (20.161.92.111:55976). Mar 13 00:04:31.021653 sshd[5503]: Accepted publickey for core from 20.161.92.111 port 55976 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:31.024414 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:31.030799 systemd-logind[1528]: New session 8 of user core. Mar 13 00:04:31.040458 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:04:31.410774 sshd[5507]: Connection closed by 20.161.92.111 port 55976 Mar 13 00:04:31.409920 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:31.415248 systemd[1]: sshd@8-168.119.109.176:22-20.161.92.111:55976.service: Deactivated successfully. Mar 13 00:04:31.415309 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:04:31.418353 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:04:31.422527 systemd-logind[1528]: Removed session 8. Mar 13 00:04:36.522951 systemd[1]: Started sshd@9-168.119.109.176:22-20.161.92.111:55986.service - OpenSSH per-connection server daemon (20.161.92.111:55986). Mar 13 00:04:37.071148 sshd[5549]: Accepted publickey for core from 20.161.92.111 port 55986 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:37.074441 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:37.081127 systemd-logind[1528]: New session 9 of user core. Mar 13 00:04:37.090411 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:04:37.452278 sshd[5575]: Connection closed by 20.161.92.111 port 55986 Mar 13 00:04:37.452857 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:37.459809 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. 
Mar 13 00:04:37.461707 systemd[1]: sshd@9-168.119.109.176:22-20.161.92.111:55986.service: Deactivated successfully. Mar 13 00:04:37.466399 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:04:37.467845 systemd-logind[1528]: Removed session 9. Mar 13 00:04:42.562012 systemd[1]: Started sshd@10-168.119.109.176:22-20.161.92.111:36880.service - OpenSSH per-connection server daemon (20.161.92.111:36880). Mar 13 00:04:43.105636 sshd[5668]: Accepted publickey for core from 20.161.92.111 port 36880 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:43.109786 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:43.117909 systemd-logind[1528]: New session 10 of user core. Mar 13 00:04:43.123255 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:04:43.481372 sshd[5671]: Connection closed by 20.161.92.111 port 36880 Mar 13 00:04:43.480474 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:43.485216 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:04:43.485815 systemd[1]: sshd@10-168.119.109.176:22-20.161.92.111:36880.service: Deactivated successfully. Mar 13 00:04:43.488992 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:04:43.496164 systemd-logind[1528]: Removed session 10. Mar 13 00:04:43.587841 systemd[1]: Started sshd@11-168.119.109.176:22-20.161.92.111:36884.service - OpenSSH per-connection server daemon (20.161.92.111:36884). Mar 13 00:04:44.121748 sshd[5683]: Accepted publickey for core from 20.161.92.111 port 36884 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:44.123337 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:44.130529 systemd-logind[1528]: New session 11 of user core. Mar 13 00:04:44.136372 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 13 00:04:44.532800 sshd[5686]: Connection closed by 20.161.92.111 port 36884 Mar 13 00:04:44.532425 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:44.541125 systemd[1]: sshd@11-168.119.109.176:22-20.161.92.111:36884.service: Deactivated successfully. Mar 13 00:04:44.544443 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:04:44.545953 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:04:44.547769 systemd-logind[1528]: Removed session 11. Mar 13 00:04:44.633909 systemd[1]: Started sshd@12-168.119.109.176:22-20.161.92.111:36900.service - OpenSSH per-connection server daemon (20.161.92.111:36900). Mar 13 00:04:45.164148 sshd[5696]: Accepted publickey for core from 20.161.92.111 port 36900 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:45.166122 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:45.172639 systemd-logind[1528]: New session 12 of user core. Mar 13 00:04:45.179412 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:04:45.529425 sshd[5699]: Connection closed by 20.161.92.111 port 36900 Mar 13 00:04:45.530626 sshd-session[5696]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:45.535942 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:04:45.536567 systemd[1]: sshd@12-168.119.109.176:22-20.161.92.111:36900.service: Deactivated successfully. Mar 13 00:04:45.542611 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:04:45.549026 systemd-logind[1528]: Removed session 12. Mar 13 00:04:50.637033 systemd[1]: Started sshd@13-168.119.109.176:22-20.161.92.111:51688.service - OpenSSH per-connection server daemon (20.161.92.111:51688). 
Mar 13 00:04:51.170338 sshd[5725]: Accepted publickey for core from 20.161.92.111 port 51688 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:51.172491 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:51.179218 systemd-logind[1528]: New session 13 of user core. Mar 13 00:04:51.191444 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:04:51.536422 sshd[5738]: Connection closed by 20.161.92.111 port 51688 Mar 13 00:04:51.537251 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:51.545894 systemd[1]: sshd@13-168.119.109.176:22-20.161.92.111:51688.service: Deactivated successfully. Mar 13 00:04:51.549485 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:04:51.552537 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:04:51.554512 systemd-logind[1528]: Removed session 13. Mar 13 00:04:51.645328 systemd[1]: Started sshd@14-168.119.109.176:22-20.161.92.111:51698.service - OpenSSH per-connection server daemon (20.161.92.111:51698). Mar 13 00:04:52.173107 sshd[5750]: Accepted publickey for core from 20.161.92.111 port 51698 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:52.174619 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:52.181790 systemd-logind[1528]: New session 14 of user core. Mar 13 00:04:52.189349 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:04:52.698886 sshd[5753]: Connection closed by 20.161.92.111 port 51698 Mar 13 00:04:52.700858 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:52.708827 systemd[1]: sshd@14-168.119.109.176:22-20.161.92.111:51698.service: Deactivated successfully. Mar 13 00:04:52.711920 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 13 00:04:52.714119 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:04:52.716835 systemd-logind[1528]: Removed session 14. Mar 13 00:04:52.806393 systemd[1]: Started sshd@15-168.119.109.176:22-20.161.92.111:51714.service - OpenSSH per-connection server daemon (20.161.92.111:51714). Mar 13 00:04:53.346921 sshd[5807]: Accepted publickey for core from 20.161.92.111 port 51714 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:53.349177 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:53.356960 systemd-logind[1528]: New session 15 of user core. Mar 13 00:04:53.362508 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:04:54.389091 sshd[5810]: Connection closed by 20.161.92.111 port 51714 Mar 13 00:04:54.388894 sshd-session[5807]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:54.394524 systemd[1]: sshd@15-168.119.109.176:22-20.161.92.111:51714.service: Deactivated successfully. Mar 13 00:04:54.397936 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:04:54.400531 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:04:54.402238 systemd-logind[1528]: Removed session 15. Mar 13 00:04:54.495338 systemd[1]: Started sshd@16-168.119.109.176:22-20.161.92.111:51728.service - OpenSSH per-connection server daemon (20.161.92.111:51728). Mar 13 00:04:55.025962 sshd[5835]: Accepted publickey for core from 20.161.92.111 port 51728 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:55.028009 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:55.036181 systemd-logind[1528]: New session 16 of user core. Mar 13 00:04:55.043402 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 13 00:04:55.520105 sshd[5838]: Connection closed by 20.161.92.111 port 51728 Mar 13 00:04:55.519925 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:55.528574 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:04:55.529935 systemd[1]: sshd@16-168.119.109.176:22-20.161.92.111:51728.service: Deactivated successfully. Mar 13 00:04:55.532411 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:04:55.536646 systemd-logind[1528]: Removed session 16. Mar 13 00:04:55.632261 systemd[1]: Started sshd@17-168.119.109.176:22-20.161.92.111:51744.service - OpenSSH per-connection server daemon (20.161.92.111:51744). Mar 13 00:04:56.177551 sshd[5848]: Accepted publickey for core from 20.161.92.111 port 51744 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:04:56.179934 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:04:56.185392 systemd-logind[1528]: New session 17 of user core. Mar 13 00:04:56.199313 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:04:56.552860 sshd[5851]: Connection closed by 20.161.92.111 port 51744 Mar 13 00:04:56.553813 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Mar 13 00:04:56.559118 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:04:56.560264 systemd[1]: sshd@17-168.119.109.176:22-20.161.92.111:51744.service: Deactivated successfully. Mar 13 00:04:56.564597 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:04:56.568134 systemd-logind[1528]: Removed session 17. Mar 13 00:05:01.662515 systemd[1]: Started sshd@18-168.119.109.176:22-20.161.92.111:53066.service - OpenSSH per-connection server daemon (20.161.92.111:53066). 
Mar 13 00:05:02.190751 sshd[5867]: Accepted publickey for core from 20.161.92.111 port 53066 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:05:02.192572 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:05:02.198904 systemd-logind[1528]: New session 18 of user core. Mar 13 00:05:02.204428 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:05:02.551204 sshd[5870]: Connection closed by 20.161.92.111 port 53066 Mar 13 00:05:02.553442 sshd-session[5867]: pam_unix(sshd:session): session closed for user core Mar 13 00:05:02.559689 systemd[1]: sshd@18-168.119.109.176:22-20.161.92.111:53066.service: Deactivated successfully. Mar 13 00:05:02.563351 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:05:02.565801 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:05:02.567330 systemd-logind[1528]: Removed session 18. Mar 13 00:05:07.663788 systemd[1]: Started sshd@19-168.119.109.176:22-20.161.92.111:53082.service - OpenSSH per-connection server daemon (20.161.92.111:53082). Mar 13 00:05:08.196722 sshd[5904]: Accepted publickey for core from 20.161.92.111 port 53082 ssh2: RSA SHA256:FZv9jIyBkQro1AwcCziPsaQ5MV8OObRYGf9smKb8nkU Mar 13 00:05:08.199946 sshd-session[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:05:08.207090 systemd-logind[1528]: New session 19 of user core. Mar 13 00:05:08.212323 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:05:08.558837 sshd[5929]: Connection closed by 20.161.92.111 port 53082 Mar 13 00:05:08.557779 sshd-session[5904]: pam_unix(sshd:session): session closed for user core Mar 13 00:05:08.565235 systemd[1]: sshd@19-168.119.109.176:22-20.161.92.111:53082.service: Deactivated successfully. Mar 13 00:05:08.570394 systemd[1]: session-19.scope: Deactivated successfully. 
Mar 13 00:05:08.571907 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:05:08.575034 systemd-logind[1528]: Removed session 19. Mar 13 00:05:23.332723 systemd[1]: cri-containerd-163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419.scope: Deactivated successfully. Mar 13 00:05:23.333111 systemd[1]: cri-containerd-163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419.scope: Consumed 4.995s CPU time, 63.3M memory peak, 2.9M read from disk. Mar 13 00:05:23.342313 containerd[1547]: time="2026-03-13T00:05:23.342270310Z" level=info msg="received container exit event container_id:\"163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419\" id:\"163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419\" pid:2641 exit_status:1 exited_at:{seconds:1773360323 nanos:341840733}" Mar 13 00:05:23.368701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419-rootfs.mount: Deactivated successfully. Mar 13 00:05:23.539928 kubelet[2780]: E0313 00:05:23.539878 2780 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43898->10.0.0.2:2379: read: connection timed out" Mar 13 00:05:23.553392 systemd[1]: cri-containerd-d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096.scope: Deactivated successfully. Mar 13 00:05:23.554354 systemd[1]: cri-containerd-d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096.scope: Consumed 3.334s CPU time, 25.9M memory peak, 3.5M read from disk. 
Mar 13 00:05:23.556927 containerd[1547]: time="2026-03-13T00:05:23.556838851Z" level=info msg="received container exit event container_id:\"d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096\" id:\"d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096\" pid:2634 exit_status:1 exited_at:{seconds:1773360323 nanos:556293190}"
Mar 13 00:05:23.591402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096-rootfs.mount: Deactivated successfully.
Mar 13 00:05:24.188183 systemd[1]: cri-containerd-58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a.scope: Deactivated successfully.
Mar 13 00:05:24.189333 systemd[1]: cri-containerd-58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a.scope: Consumed 18.363s CPU time, 137.1M memory peak, 2.5M read from disk.
Mar 13 00:05:24.191235 containerd[1547]: time="2026-03-13T00:05:24.190682051Z" level=info msg="received container exit event container_id:\"58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a\" id:\"58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a\" pid:3106 exit_status:1 exited_at:{seconds:1773360324 nanos:189951743}"
Mar 13 00:05:24.227421 kubelet[2780]: I0313 00:05:24.226663    2780 scope.go:117] "RemoveContainer" containerID="d077d03907a664b3205bfd178a7e03e93c73ed8176e34239a49f5369047fa096"
Mar 13 00:05:24.227421 kubelet[2780]: I0313 00:05:24.226777    2780 scope.go:117] "RemoveContainer" containerID="163e0c7cd567cb1e014c69bf731e404288f5f624368e2a376afe6edd9441f419"
Mar 13 00:05:24.239923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a-rootfs.mount: Deactivated successfully.
Mar 13 00:05:24.247727 containerd[1547]: time="2026-03-13T00:05:24.247680660Z" level=info msg="CreateContainer within sandbox \"43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 13 00:05:24.250154 containerd[1547]: time="2026-03-13T00:05:24.249863583Z" level=info msg="CreateContainer within sandbox \"f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 13 00:05:24.264290 containerd[1547]: time="2026-03-13T00:05:24.264231569Z" level=info msg="Container 3f845a120855c74214eee84d058e39171d170d16a683087c463d201f41851767: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:05:24.266535 containerd[1547]: time="2026-03-13T00:05:24.266490135Z" level=info msg="Container b62333732083a0923dc5899cf0271caa4de0b7dac1995e5b6b88a1701598927a: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:05:24.275949 containerd[1547]: time="2026-03-13T00:05:24.275899453Z" level=info msg="CreateContainer within sandbox \"43bc6e90b35d8e4a1b18b54c2458eba8dc050995f93f449e168856007418e67a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3f845a120855c74214eee84d058e39171d170d16a683087c463d201f41851767\""
Mar 13 00:05:24.276619 containerd[1547]: time="2026-03-13T00:05:24.276591600Z" level=info msg="StartContainer for \"3f845a120855c74214eee84d058e39171d170d16a683087c463d201f41851767\""
Mar 13 00:05:24.278459 containerd[1547]: time="2026-03-13T00:05:24.278401989Z" level=info msg="connecting to shim 3f845a120855c74214eee84d058e39171d170d16a683087c463d201f41851767" address="unix:///run/containerd/s/8ba54817637e9b2d740ea831507af06073ebfa711c26b5b6103089d445ba1f60" protocol=ttrpc version=3
Mar 13 00:05:24.280971 containerd[1547]: time="2026-03-13T00:05:24.280933965Z" level=info msg="CreateContainer within sandbox \"f8af977aacac585f0279b7552edbb6d9fd22fd5e46b26d47e65f1349dcd9f30a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b62333732083a0923dc5899cf0271caa4de0b7dac1995e5b6b88a1701598927a\""
Mar 13 00:05:24.281891 containerd[1547]: time="2026-03-13T00:05:24.281859120Z" level=info msg="StartContainer for \"b62333732083a0923dc5899cf0271caa4de0b7dac1995e5b6b88a1701598927a\""
Mar 13 00:05:24.285284 containerd[1547]: time="2026-03-13T00:05:24.285250009Z" level=info msg="connecting to shim b62333732083a0923dc5899cf0271caa4de0b7dac1995e5b6b88a1701598927a" address="unix:///run/containerd/s/f7b2cfd28c691331a410806107ebabf18e218d9d1a8ac299919615ef89841db9" protocol=ttrpc version=3
Mar 13 00:05:24.301260 systemd[1]: Started cri-containerd-3f845a120855c74214eee84d058e39171d170d16a683087c463d201f41851767.scope - libcontainer container 3f845a120855c74214eee84d058e39171d170d16a683087c463d201f41851767.
Mar 13 00:05:24.318345 systemd[1]: Started cri-containerd-b62333732083a0923dc5899cf0271caa4de0b7dac1995e5b6b88a1701598927a.scope - libcontainer container b62333732083a0923dc5899cf0271caa4de0b7dac1995e5b6b88a1701598927a.
Mar 13 00:05:24.363280 containerd[1547]: time="2026-03-13T00:05:24.363229657Z" level=info msg="StartContainer for \"3f845a120855c74214eee84d058e39171d170d16a683087c463d201f41851767\" returns successfully"
Mar 13 00:05:24.422195 containerd[1547]: time="2026-03-13T00:05:24.422152099Z" level=info msg="StartContainer for \"b62333732083a0923dc5899cf0271caa4de0b7dac1995e5b6b88a1701598927a\" returns successfully"
Mar 13 00:05:24.435767 kubelet[2780]: I0313 00:05:24.435608    2780 status_manager.go:895] "Failed to get status for pod" podUID="44d4cb533159abdfa2bb4f6afb0ccca0" pod="kube-system/kube-scheduler-ci-4459-2-4-n-499db54055" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43820->10.0.0.2:2379: read: connection timed out"
Mar 13 00:05:25.232580 kubelet[2780]: I0313 00:05:25.232546    2780 scope.go:117] "RemoveContainer" containerID="58780a7b57d8d0796adf7995d71fde0e488e8f3ca8ee9ba2175a39c8d818ed7a"
Mar 13 00:05:25.236611 containerd[1547]: time="2026-03-13T00:05:25.236399460Z" level=info msg="CreateContainer within sandbox \"07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 13 00:05:25.254747 containerd[1547]: time="2026-03-13T00:05:25.254664147Z" level=info msg="Container 1d9fa1845d062478416251c272a3747c3db6f7501cb0b52c6eb9d4a309ee8f47: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:05:25.269381 containerd[1547]: time="2026-03-13T00:05:25.268985286Z" level=info msg="CreateContainer within sandbox \"07d79d4221972e88fc4ed68f539fae3e35cb9bc1a48df9722af0dae69204a823\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1d9fa1845d062478416251c272a3747c3db6f7501cb0b52c6eb9d4a309ee8f47\""
Mar 13 00:05:25.271686 containerd[1547]: time="2026-03-13T00:05:25.271335414Z" level=info msg="StartContainer for \"1d9fa1845d062478416251c272a3747c3db6f7501cb0b52c6eb9d4a309ee8f47\""
Mar 13 00:05:25.272618 containerd[1547]: time="2026-03-13T00:05:25.272574381Z" level=info msg="connecting to shim 1d9fa1845d062478416251c272a3747c3db6f7501cb0b52c6eb9d4a309ee8f47" address="unix:///run/containerd/s/8ba09b9bbcaed8992689642b14ef0383e2eb3e7786b372474404057111cff680" protocol=ttrpc version=3
Mar 13 00:05:25.314286 systemd[1]: Started cri-containerd-1d9fa1845d062478416251c272a3747c3db6f7501cb0b52c6eb9d4a309ee8f47.scope - libcontainer container 1d9fa1845d062478416251c272a3747c3db6f7501cb0b52c6eb9d4a309ee8f47.
Mar 13 00:05:25.571309 containerd[1547]: time="2026-03-13T00:05:25.571189772Z" level=info msg="StartContainer for \"1d9fa1845d062478416251c272a3747c3db6f7501cb0b52c6eb9d4a309ee8f47\" returns successfully"