May 14 04:55:32.826027 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 04:55:32.826047 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 03:42:50 -00 2025
May 14 04:55:32.826056 kernel: KASLR enabled
May 14 04:55:32.826062 kernel: efi: EFI v2.7 by EDK II
May 14 04:55:32.826067 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 14 04:55:32.826072 kernel: random: crng init done
May 14 04:55:32.826079 kernel: secureboot: Secure boot disabled
May 14 04:55:32.826085 kernel: ACPI: Early table checksum verification disabled
May 14 04:55:32.826091 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 14 04:55:32.826098 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 04:55:32.826103 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826109 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826115 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826121 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826127 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826134 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826141 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826146 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826153 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 04:55:32.826158 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 04:55:32.826165 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 14 04:55:32.826171 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 04:55:32.826177 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 14 04:55:32.826182 kernel: Zone ranges:
May 14 04:55:32.826188 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
May 14 04:55:32.826195 kernel:   DMA32    empty
May 14 04:55:32.826201 kernel:   Normal   empty
May 14 04:55:32.826207 kernel:   Device   empty
May 14 04:55:32.826213 kernel: Movable zone start for each node
May 14 04:55:32.826218 kernel: Early memory node ranges
May 14 04:55:32.826225 kernel:   node   0: [mem 0x0000000040000000-0x00000000db81ffff]
May 14 04:55:32.826231 kernel:   node   0: [mem 0x00000000db820000-0x00000000db82ffff]
May 14 04:55:32.826236 kernel:   node   0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 14 04:55:32.826242 kernel:   node   0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 14 04:55:32.826248 kernel:   node   0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 14 04:55:32.826254 kernel:   node   0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 14 04:55:32.826260 kernel:   node   0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 14 04:55:32.826267 kernel:   node   0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 14 04:55:32.826273 kernel:   node   0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 14 04:55:32.826279 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 04:55:32.826287 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 04:55:32.826294 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 04:55:32.826300 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 04:55:32.826308 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 04:55:32.826314 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 04:55:32.826321 kernel: psci: probing for conduit method from ACPI.
May 14 04:55:32.826327 kernel: psci: PSCIv1.1 detected in firmware.
May 14 04:55:32.826333 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 04:55:32.826340 kernel: psci: Trusted OS migration not required
May 14 04:55:32.826346 kernel: psci: SMC Calling Convention v1.1
May 14 04:55:32.826352 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 04:55:32.826359 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 14 04:55:32.826365 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 14 04:55:32.826373 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 04:55:32.826379 kernel: Detected PIPT I-cache on CPU0
May 14 04:55:32.826385 kernel: CPU features: detected: GIC system register CPU interface
May 14 04:55:32.826391 kernel: CPU features: detected: Spectre-v4
May 14 04:55:32.826398 kernel: CPU features: detected: Spectre-BHB
May 14 04:55:32.826404 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 04:55:32.826410 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 04:55:32.826417 kernel: CPU features: detected: ARM erratum 1418040
May 14 04:55:32.826423 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 04:55:32.826429 kernel: alternatives: applying boot alternatives
May 14 04:55:32.826436 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=121c9a3653fd599e6c6b931638a08771d538e77e97aff08e06f2cb7bca392d8e
May 14 04:55:32.826444 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 04:55:32.826451 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 04:55:32.826458 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 04:55:32.826464 kernel: Fallback order for Node 0: 0
May 14 04:55:32.826470 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 14 04:55:32.826477 kernel: Policy zone: DMA
May 14 04:55:32.826483 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 04:55:32.826489 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 14 04:55:32.826496 kernel: software IO TLB: area num 4.
May 14 04:55:32.826502 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 14 04:55:32.826509 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 14 04:55:32.826515 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 04:55:32.826523 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 04:55:32.826530 kernel: rcu: 	RCU event tracing is enabled.
May 14 04:55:32.826537 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 04:55:32.826544 kernel: 	Trampoline variant of Tasks RCU enabled.
May 14 04:55:32.826550 kernel: 	Tracing variant of Tasks RCU enabled.
May 14 04:55:32.826557 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 04:55:32.826563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 04:55:32.826569 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 04:55:32.826576 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 04:55:32.826582 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 04:55:32.826588 kernel: GICv3: 256 SPIs implemented
May 14 04:55:32.826596 kernel: GICv3: 0 Extended SPIs implemented
May 14 04:55:32.826602 kernel: Root IRQ handler: gic_handle_irq
May 14 04:55:32.826617 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 04:55:32.826624 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 14 04:55:32.826630 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 04:55:32.826637 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 04:55:32.826643 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 14 04:55:32.826650 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 14 04:55:32.826656 kernel: GICv3: using LPI property table @0x0000000040100000
May 14 04:55:32.826663 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 14 04:55:32.826669 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 04:55:32.826675 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:55:32.826684 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 04:55:32.826696 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 04:55:32.826703 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 04:55:32.826709 kernel: arm-pv: using stolen time PV
May 14 04:55:32.826716 kernel: Console: colour dummy device 80x25
May 14 04:55:32.826722 kernel: ACPI: Core revision 20240827
May 14 04:55:32.826729 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 04:55:32.826736 kernel: pid_max: default: 32768 minimum: 301
May 14 04:55:32.826742 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 04:55:32.826750 kernel: landlock: Up and running.
May 14 04:55:32.826756 kernel: SELinux: Initializing.
May 14 04:55:32.826763 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 04:55:32.826769 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 04:55:32.826810 kernel: rcu: Hierarchical SRCU implementation.
May 14 04:55:32.826818 kernel: rcu: 	Max phase no-delay instances is 400.
May 14 04:55:32.826824 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 04:55:32.826831 kernel: Remapping and enabling EFI services.
May 14 04:55:32.826837 kernel: smp: Bringing up secondary CPUs ...
May 14 04:55:32.826844 kernel: Detected PIPT I-cache on CPU1
May 14 04:55:32.826857 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 04:55:32.826864 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 14 04:55:32.826872 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:55:32.826878 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 04:55:32.826885 kernel: Detected PIPT I-cache on CPU2
May 14 04:55:32.826892 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 04:55:32.826899 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 14 04:55:32.826907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:55:32.826914 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 04:55:32.826920 kernel: Detected PIPT I-cache on CPU3
May 14 04:55:32.826928 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 04:55:32.826934 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 14 04:55:32.826941 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 04:55:32.826948 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 04:55:32.826955 kernel: smp: Brought up 1 node, 4 CPUs
May 14 04:55:32.826962 kernel: SMP: Total of 4 processors activated.
May 14 04:55:32.826969 kernel: CPU: All CPU(s) started at EL1
May 14 04:55:32.826977 kernel: CPU features: detected: 32-bit EL0 Support
May 14 04:55:32.826983 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 04:55:32.826990 kernel: CPU features: detected: Common not Private translations
May 14 04:55:32.826997 kernel: CPU features: detected: CRC32 instructions
May 14 04:55:32.827004 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 04:55:32.827011 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 04:55:32.827018 kernel: CPU features: detected: LSE atomic instructions
May 14 04:55:32.827025 kernel: CPU features: detected: Privileged Access Never
May 14 04:55:32.827031 kernel: CPU features: detected: RAS Extension Support
May 14 04:55:32.827040 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 04:55:32.827047 kernel: alternatives: applying system-wide alternatives
May 14 04:55:32.827054 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 14 04:55:32.827061 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 14 04:55:32.827068 kernel: devtmpfs: initialized
May 14 04:55:32.827075 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 04:55:32.827082 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 04:55:32.827089 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 04:55:32.827095 kernel: 0 pages in range for non-PLT usage
May 14 04:55:32.827105 kernel: 508544 pages in range for PLT usage
May 14 04:55:32.827112 kernel: pinctrl core: initialized pinctrl subsystem
May 14 04:55:32.827119 kernel: SMBIOS 3.0.0 present.
May 14 04:55:32.827125 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 04:55:32.827132 kernel: DMI: Memory slots populated: 1/1
May 14 04:55:32.827139 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 04:55:32.827146 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 04:55:32.827166 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 04:55:32.827173 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 04:55:32.827181 kernel: audit: initializing netlink subsys (disabled)
May 14 04:55:32.827188 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
May 14 04:55:32.827194 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 04:55:32.827201 kernel: cpuidle: using governor menu
May 14 04:55:32.827209 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 04:55:32.827215 kernel: ASID allocator initialised with 32768 entries
May 14 04:55:32.827222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 04:55:32.827230 kernel: Serial: AMBA PL011 UART driver
May 14 04:55:32.827236 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 04:55:32.827245 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 04:55:32.827252 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 04:55:32.827259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 04:55:32.827265 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 04:55:32.827272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 04:55:32.827279 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 04:55:32.827286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 04:55:32.827292 kernel: ACPI: Added _OSI(Module Device)
May 14 04:55:32.827299 kernel: ACPI: Added _OSI(Processor Device)
May 14 04:55:32.827307 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 04:55:32.827314 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 04:55:32.827321 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 04:55:32.827328 kernel: ACPI: Interpreter enabled
May 14 04:55:32.827335 kernel: ACPI: Using GIC for interrupt routing
May 14 04:55:32.827345 kernel: ACPI: MCFG table detected, 1 entries
May 14 04:55:32.827352 kernel: ACPI: CPU0 has been hot-added
May 14 04:55:32.827359 kernel: ACPI: CPU1 has been hot-added
May 14 04:55:32.827368 kernel: ACPI: CPU2 has been hot-added
May 14 04:55:32.827378 kernel: ACPI: CPU3 has been hot-added
May 14 04:55:32.827386 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 04:55:32.827393 kernel: printk: legacy console [ttyAMA0] enabled
May 14 04:55:32.827400 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 04:55:32.827526 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 04:55:32.827590 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 04:55:32.827656 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 04:55:32.827713 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 04:55:32.827770 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 04:55:32.827788 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 04:55:32.827795 kernel: PCI host bridge to bus 0000:00
May 14 04:55:32.827864 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 04:55:32.827919 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 04:55:32.827970 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 04:55:32.828020 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 04:55:32.828092 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 14 04:55:32.828159 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 04:55:32.828219 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 14 04:55:32.828276 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 14 04:55:32.828334 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 04:55:32.828391 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 14 04:55:32.828449 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 14 04:55:32.828510 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 14 04:55:32.828563 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 04:55:32.828622 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 04:55:32.828678 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 04:55:32.828688 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 04:55:32.828695 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 04:55:32.828702 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 04:55:32.828710 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 04:55:32.828717 kernel: iommu: Default domain type: Translated
May 14 04:55:32.828724 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 04:55:32.828731 kernel: efivars: Registered efivars operations
May 14 04:55:32.828738 kernel: vgaarb: loaded
May 14 04:55:32.828745 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 04:55:32.828752 kernel: VFS: Disk quotas dquot_6.6.0
May 14 04:55:32.828759 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 04:55:32.828765 kernel: pnp: PnP ACPI init
May 14 04:55:32.828843 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 04:55:32.828853 kernel: pnp: PnP ACPI: found 1 devices
May 14 04:55:32.828860 kernel: NET: Registered PF_INET protocol family
May 14 04:55:32.828867 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 04:55:32.828874 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 04:55:32.828881 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 04:55:32.828888 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 04:55:32.828895 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 04:55:32.828904 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 04:55:32.828911 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 04:55:32.828918 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 04:55:32.828925 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 04:55:32.828932 kernel: PCI: CLS 0 bytes, default 64
May 14 04:55:32.828939 kernel: kvm [1]: HYP mode not available
May 14 04:55:32.828946 kernel: Initialise system trusted keyrings
May 14 04:55:32.828953 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 04:55:32.828960 kernel: Key type asymmetric registered
May 14 04:55:32.828967 kernel: Asymmetric key parser 'x509' registered
May 14 04:55:32.828974 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 04:55:32.828981 kernel: io scheduler mq-deadline registered
May 14 04:55:32.828988 kernel: io scheduler kyber registered
May 14 04:55:32.828995 kernel: io scheduler bfq registered
May 14 04:55:32.829002 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 04:55:32.829009 kernel: ACPI: button: Power Button [PWRB]
May 14 04:55:32.829016 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 04:55:32.829076 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 04:55:32.829086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 04:55:32.829093 kernel: thunder_xcv, ver 1.0
May 14 04:55:32.829100 kernel: thunder_bgx, ver 1.0
May 14 04:55:32.829107 kernel: nicpf, ver 1.0
May 14 04:55:32.829113 kernel: nicvf, ver 1.0
May 14 04:55:32.829178 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 04:55:32.829232 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T04:55:32 UTC (1747198532)
May 14 04:55:32.829241 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 04:55:32.829250 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 14 04:55:32.829257 kernel: watchdog: NMI not fully supported
May 14 04:55:32.829263 kernel: watchdog: Hard watchdog permanently disabled
May 14 04:55:32.829270 kernel: NET: Registered PF_INET6 protocol family
May 14 04:55:32.829277 kernel: Segment Routing with IPv6
May 14 04:55:32.829284 kernel: In-situ OAM (IOAM) with IPv6
May 14 04:55:32.829291 kernel: NET: Registered PF_PACKET protocol family
May 14 04:55:32.829297 kernel: Key type dns_resolver registered
May 14 04:55:32.829304 kernel: registered taskstats version 1
May 14 04:55:32.829311 kernel: Loading compiled-in X.509 certificates
May 14 04:55:32.829319 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 9f54d711faad5edc118c062fcbac248335430a87'
May 14 04:55:32.829326 kernel: Demotion targets for Node 0: null
May 14 04:55:32.829333 kernel: Key type .fscrypt registered
May 14 04:55:32.829339 kernel: Key type fscrypt-provisioning registered
May 14 04:55:32.829346 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 04:55:32.829353 kernel: ima: Allocated hash algorithm: sha1
May 14 04:55:32.829360 kernel: ima: No architecture policies found
May 14 04:55:32.829367 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 04:55:32.829375 kernel: clk: Disabling unused clocks
May 14 04:55:32.829382 kernel: PM: genpd: Disabling unused power domains
May 14 04:55:32.829389 kernel: Warning: unable to open an initial console.
May 14 04:55:32.829396 kernel: Freeing unused kernel memory: 39424K
May 14 04:55:32.829402 kernel: Run /init as init process
May 14 04:55:32.829409 kernel:   with arguments:
May 14 04:55:32.829416 kernel:     /init
May 14 04:55:32.829423 kernel:   with environment:
May 14 04:55:32.829429 kernel:     HOME=/
May 14 04:55:32.829438 kernel:     TERM=linux
May 14 04:55:32.829444 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 04:55:32.829452 systemd[1]: Successfully made /usr/ read-only.
May 14 04:55:32.829461 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 04:55:32.829469 systemd[1]: Detected virtualization kvm.
May 14 04:55:32.829476 systemd[1]: Detected architecture arm64.
May 14 04:55:32.829483 systemd[1]: Running in initrd.
May 14 04:55:32.829491 systemd[1]: No hostname configured, using default hostname.
May 14 04:55:32.829499 systemd[1]: Hostname set to .
May 14 04:55:32.829506 systemd[1]: Initializing machine ID from VM UUID.
May 14 04:55:32.829514 systemd[1]: Queued start job for default target initrd.target.
May 14 04:55:32.829521 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 04:55:32.829528 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 04:55:32.829536 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 04:55:32.829544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 04:55:32.829551 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 04:55:32.829560 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 04:55:32.829569 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 04:55:32.829576 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 04:55:32.829583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 04:55:32.829591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 04:55:32.829598 systemd[1]: Reached target paths.target - Path Units.
May 14 04:55:32.829613 systemd[1]: Reached target slices.target - Slice Units.
May 14 04:55:32.829621 systemd[1]: Reached target swap.target - Swaps.
May 14 04:55:32.829628 systemd[1]: Reached target timers.target - Timer Units.
May 14 04:55:32.829635 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 04:55:32.829643 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 04:55:32.829650 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 04:55:32.829657 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 04:55:32.829665 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 04:55:32.829672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 04:55:32.829681 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 04:55:32.829688 systemd[1]: Reached target sockets.target - Socket Units.
May 14 04:55:32.829696 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 04:55:32.829703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 04:55:32.829710 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 04:55:32.829718 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 04:55:32.829725 systemd[1]: Starting systemd-fsck-usr.service...
May 14 04:55:32.829732 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 04:55:32.829741 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 04:55:32.829748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 04:55:32.829755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 04:55:32.829763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 04:55:32.829770 systemd[1]: Finished systemd-fsck-usr.service.
May 14 04:55:32.829834 systemd-journald[243]: Collecting audit messages is disabled.
May 14 04:55:32.829853 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 04:55:32.829861 systemd-journald[243]: Journal started
May 14 04:55:32.829880 systemd-journald[243]: Runtime Journal (/run/log/journal/7a466567a3c34f0f91379fbf630d06e8) is 6M, max 48.5M, 42.4M free.
May 14 04:55:32.823504 systemd-modules-load[244]: Inserted module 'overlay'
May 14 04:55:32.838301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 04:55:32.840375 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 04:55:32.840389 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 04:55:32.842227 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 04:55:32.845161 kernel: Bridge firewalling registered
May 14 04:55:32.845108 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 14 04:55:32.845228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 04:55:32.847390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 04:55:32.859218 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 04:55:32.860454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 04:55:32.863694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 04:55:32.866740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 04:55:32.869670 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 04:55:32.872441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 04:55:32.874529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 04:55:32.877082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 04:55:32.879592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 04:55:32.897424 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 04:55:32.910842 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=121c9a3653fd599e6c6b931638a08771d538e77e97aff08e06f2cb7bca392d8e
May 14 04:55:32.926150 systemd-resolved[285]: Positive Trust Anchors:
May 14 04:55:32.926166 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 04:55:32.926196 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 04:55:32.930906 systemd-resolved[285]: Defaulting to hostname 'linux'.
May 14 04:55:32.931791 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 04:55:32.935266 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 04:55:32.988804 kernel: SCSI subsystem initialized
May 14 04:55:32.993793 kernel: Loading iSCSI transport class v2.0-870.
May 14 04:55:33.001803 kernel: iscsi: registered transport (tcp)
May 14 04:55:33.013798 kernel: iscsi: registered transport (qla4xxx)
May 14 04:55:33.013816 kernel: QLogic iSCSI HBA Driver
May 14 04:55:33.029506 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 04:55:33.044741 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 04:55:33.046842 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 04:55:33.087331 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 04:55:33.089436 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 04:55:33.152839 kernel: raid6: neonx8 gen() 15755 MB/s May 14 04:55:33.169821 kernel: raid6: neonx4 gen() 15771 MB/s May 14 04:55:33.186829 kernel: raid6: neonx2 gen() 13164 MB/s May 14 04:55:33.203798 kernel: raid6: neonx1 gen() 10397 MB/s May 14 04:55:33.220802 kernel: raid6: int64x8 gen() 6890 MB/s May 14 04:55:33.237807 kernel: raid6: int64x4 gen() 7338 MB/s May 14 04:55:33.254795 kernel: raid6: int64x2 gen() 6082 MB/s May 14 04:55:33.271896 kernel: raid6: int64x1 gen() 5041 MB/s May 14 04:55:33.271912 kernel: raid6: using algorithm neonx4 gen() 15771 MB/s May 14 04:55:33.289940 kernel: raid6: .... xor() 12320 MB/s, rmw enabled May 14 04:55:33.289957 kernel: raid6: using neon recovery algorithm May 14 04:55:33.296162 kernel: xor: measuring software checksum speed May 14 04:55:33.296189 kernel: 8regs : 21618 MB/sec May 14 04:55:33.296199 kernel: 32regs : 20865 MB/sec May 14 04:55:33.296796 kernel: arm64_neon : 28089 MB/sec May 14 04:55:33.296812 kernel: xor: using function: arm64_neon (28089 MB/sec) May 14 04:55:33.350818 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 04:55:33.356109 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 04:55:33.359405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 04:55:33.384578 systemd-udevd[496]: Using default interface naming scheme 'v255'. May 14 04:55:33.388585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 04:55:33.390812 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 04:55:33.420429 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation May 14 04:55:33.440266 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 04:55:33.442379 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 04:55:33.495696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 14 04:55:33.497922 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 04:55:33.552798 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 14 04:55:33.570411 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 04:55:33.570495 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 04:55:33.570506 kernel: GPT:9289727 != 19775487 May 14 04:55:33.570514 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 04:55:33.570523 kernel: GPT:9289727 != 19775487 May 14 04:55:33.570531 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 04:55:33.570539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 04:55:33.569472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 04:55:33.569598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 04:55:33.571702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 04:55:33.574106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 04:55:33.596857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 04:55:33.605080 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 04:55:33.611863 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 04:55:33.617983 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 04:55:33.619159 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 04:55:33.628832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 04:55:33.636284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 14 04:55:33.637466 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 04:55:33.639528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 04:55:33.641621 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 04:55:33.644212 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 04:55:33.645878 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 04:55:33.664054 disk-uuid[590]: Primary Header is updated. May 14 04:55:33.664054 disk-uuid[590]: Secondary Entries is updated. May 14 04:55:33.664054 disk-uuid[590]: Secondary Header is updated. May 14 04:55:33.668790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 04:55:33.668981 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 04:55:34.676821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 04:55:34.676963 disk-uuid[593]: The operation has completed successfully. May 14 04:55:34.712882 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 04:55:34.712979 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 04:55:34.741952 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 04:55:34.757627 sh[612]: Success May 14 04:55:34.772390 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 04:55:34.774304 kernel: device-mapper: uevent: version 1.0.3 May 14 04:55:34.774335 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 04:55:34.781878 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 14 04:55:34.805705 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 04:55:34.808518 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 14 04:55:34.822470 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 04:55:34.832166 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 04:55:34.832209 kernel: BTRFS: device fsid 73dd31f4-39c4-4cc0-95ea-0c124bed739c devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (624) May 14 04:55:34.833568 kernel: BTRFS info (device dm-0): first mount of filesystem 73dd31f4-39c4-4cc0-95ea-0c124bed739c May 14 04:55:34.834607 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 04:55:34.834632 kernel: BTRFS info (device dm-0): using free-space-tree May 14 04:55:34.838491 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 04:55:34.839750 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 04:55:34.841175 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 04:55:34.841891 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 04:55:34.843324 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 04:55:34.872710 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (655) May 14 04:55:34.872750 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870 May 14 04:55:34.872760 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 04:55:34.873691 kernel: BTRFS info (device vda6): using free-space-tree May 14 04:55:34.886785 kernel: BTRFS info (device vda6): last unmount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870 May 14 04:55:34.889039 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 04:55:34.890873 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 14 04:55:34.962209 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 04:55:34.967072 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 04:55:35.010862 systemd-networkd[799]: lo: Link UP May 14 04:55:35.011561 systemd-networkd[799]: lo: Gained carrier May 14 04:55:35.013072 systemd-networkd[799]: Enumeration completed May 14 04:55:35.013181 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 04:55:35.014226 systemd[1]: Reached target network.target - Network. May 14 04:55:35.015887 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 04:55:35.015891 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 04:55:35.016467 systemd-networkd[799]: eth0: Link UP May 14 04:55:35.016470 systemd-networkd[799]: eth0: Gained carrier May 14 04:55:35.016477 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 14 04:55:35.025960 ignition[704]: Ignition 2.21.0 May 14 04:55:35.025971 ignition[704]: Stage: fetch-offline May 14 04:55:35.026008 ignition[704]: no configs at "/usr/lib/ignition/base.d" May 14 04:55:35.026016 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:55:35.026193 ignition[704]: parsed url from cmdline: "" May 14 04:55:35.026196 ignition[704]: no config URL provided May 14 04:55:35.026201 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" May 14 04:55:35.026207 ignition[704]: no config at "/usr/lib/ignition/user.ign" May 14 04:55:35.026232 ignition[704]: op(1): [started] loading QEMU firmware config module May 14 04:55:35.026239 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 04:55:35.034044 ignition[704]: op(1): [finished] loading QEMU firmware config module May 14 04:55:35.034071 ignition[704]: QEMU firmware config was not found. Ignoring... May 14 04:55:35.039834 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 04:55:35.081677 ignition[704]: parsing config with SHA512: de6080be303de86ec1506508628da5e29faf4f0d9603e5d8d1ccda9ddda37de94d02fe5b75977af26a7128fda26a52fe1ad80ccda0027f295d8f20ae3dd031a0 May 14 04:55:35.086693 systemd-resolved[285]: Detected conflict on linux IN A 10.0.0.80 May 14 04:55:35.086703 systemd-resolved[285]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. May 14 04:55:35.087688 unknown[704]: fetched base config from "system" May 14 04:55:35.088163 ignition[704]: fetch-offline: fetch-offline passed May 14 04:55:35.087695 unknown[704]: fetched user config from "qemu" May 14 04:55:35.088221 ignition[704]: Ignition finished successfully May 14 04:55:35.090945 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 14 04:55:35.092657 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 04:55:35.093397 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 04:55:35.120308 ignition[812]: Ignition 2.21.0 May 14 04:55:35.120327 ignition[812]: Stage: kargs May 14 04:55:35.120442 ignition[812]: no configs at "/usr/lib/ignition/base.d" May 14 04:55:35.120451 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:55:35.122971 ignition[812]: kargs: kargs passed May 14 04:55:35.123013 ignition[812]: Ignition finished successfully May 14 04:55:35.125075 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 04:55:35.126786 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 04:55:35.149530 ignition[820]: Ignition 2.21.0 May 14 04:55:35.149549 ignition[820]: Stage: disks May 14 04:55:35.149680 ignition[820]: no configs at "/usr/lib/ignition/base.d" May 14 04:55:35.152487 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 04:55:35.149688 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:55:35.153565 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 04:55:35.150503 ignition[820]: disks: disks passed May 14 04:55:35.155087 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 04:55:35.150542 ignition[820]: Ignition finished successfully May 14 04:55:35.156736 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 04:55:35.158459 systemd[1]: Reached target sysinit.target - System Initialization. May 14 04:55:35.159789 systemd[1]: Reached target basic.target - Basic System. May 14 04:55:35.162275 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 14 04:55:35.184505 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 14 04:55:35.188585 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 04:55:35.190600 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 04:55:35.251813 kernel: EXT4-fs (vda9): mounted filesystem 008d778b-58b1-4ebe-9d06-c739d7d81b3b r/w with ordered data mode. Quota mode: none. May 14 04:55:35.251732 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 04:55:35.252910 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 04:55:35.255200 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 04:55:35.276352 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 04:55:35.277262 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 04:55:35.277298 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 04:55:35.277319 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 04:55:35.283401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 04:55:35.289457 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (840) May 14 04:55:35.289477 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870 May 14 04:55:35.289486 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 04:55:35.289495 kernel: BTRFS info (device vda6): using free-space-tree May 14 04:55:35.286861 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 04:55:35.293995 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 04:55:35.328834 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory May 14 04:55:35.332662 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory May 14 04:55:35.336442 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory May 14 04:55:35.339173 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory May 14 04:55:35.403234 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 04:55:35.405050 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 04:55:35.406388 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 04:55:35.419807 kernel: BTRFS info (device vda6): last unmount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870 May 14 04:55:35.434886 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 04:55:35.446810 ignition[953]: INFO : Ignition 2.21.0 May 14 04:55:35.446810 ignition[953]: INFO : Stage: mount May 14 04:55:35.446810 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 04:55:35.446810 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:55:35.452233 ignition[953]: INFO : mount: mount passed May 14 04:55:35.452233 ignition[953]: INFO : Ignition finished successfully May 14 04:55:35.450049 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 04:55:35.452034 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 04:55:35.830733 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 04:55:35.832255 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 14 04:55:35.861570 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (966) May 14 04:55:35.861611 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870 May 14 04:55:35.861623 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 04:55:35.862575 kernel: BTRFS info (device vda6): using free-space-tree May 14 04:55:35.866941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 04:55:35.891138 ignition[984]: INFO : Ignition 2.21.0 May 14 04:55:35.891138 ignition[984]: INFO : Stage: files May 14 04:55:35.893109 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 04:55:35.893109 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:55:35.893109 ignition[984]: DEBUG : files: compiled without relabeling support, skipping May 14 04:55:35.896376 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 04:55:35.896376 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 04:55:35.896376 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 04:55:35.896376 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 04:55:35.896376 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 04:55:35.896125 unknown[984]: wrote ssh authorized keys file for user: core May 14 04:55:35.903733 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 04:55:35.903733 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 04:55:35.974974 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK May 14 04:55:36.101304 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 04:55:36.103521 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 04:55:36.115998 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 04:55:36.115998 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 04:55:36.115998 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 04:55:36.115998 ignition[984]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 04:55:36.115998 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 04:55:36.115998 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 14 04:55:36.437903 systemd-networkd[799]: eth0: Gained IPv6LL May 14 04:55:36.472790 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 04:55:37.119855 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 04:55:37.119855 ignition[984]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 04:55:37.123541 ignition[984]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 04:55:37.127083 ignition[984]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 04:55:37.127083 ignition[984]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 04:55:37.127083 ignition[984]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 04:55:37.131615 ignition[984]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 04:55:37.131615 ignition[984]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 04:55:37.131615 ignition[984]: INFO : files: op(d): 
[finished] processing unit "coreos-metadata.service" May 14 04:55:37.131615 ignition[984]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 14 04:55:37.144638 ignition[984]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 04:55:37.148050 ignition[984]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 04:55:37.150720 ignition[984]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 14 04:55:37.150720 ignition[984]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 14 04:55:37.150720 ignition[984]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 14 04:55:37.150720 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 04:55:37.150720 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 04:55:37.150720 ignition[984]: INFO : files: files passed May 14 04:55:37.150720 ignition[984]: INFO : Ignition finished successfully May 14 04:55:37.151398 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 04:55:37.153877 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 04:55:37.158313 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 04:55:37.169977 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 04:55:37.171242 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory May 14 04:55:37.171555 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 14 04:55:37.174988 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 04:55:37.174988 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 04:55:37.177958 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 04:55:37.180081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 04:55:37.181564 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 04:55:37.184180 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 04:55:37.218525 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 04:55:37.218663 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 04:55:37.220683 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 04:55:37.222456 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 04:55:37.224195 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 04:55:37.225026 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 04:55:37.259232 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 04:55:37.261621 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 04:55:37.283693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 04:55:37.285026 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 04:55:37.286913 systemd[1]: Stopped target timers.target - Timer Units. May 14 04:55:37.288650 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 14 04:55:37.288791 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 04:55:37.291348 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 04:55:37.293288 systemd[1]: Stopped target basic.target - Basic System. May 14 04:55:37.294857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 04:55:37.296559 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 04:55:37.298362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 04:55:37.300056 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 04:55:37.301868 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 04:55:37.303742 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 04:55:37.305763 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 04:55:37.307934 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 04:55:37.309654 systemd[1]: Stopped target swap.target - Swaps. May 14 04:55:37.311171 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 04:55:37.311349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 04:55:37.313638 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 04:55:37.314974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 04:55:37.316874 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 04:55:37.317873 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 04:55:37.319980 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 04:55:37.320105 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 04:55:37.322892 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 14 04:55:37.323019 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 04:55:37.325008 systemd[1]: Stopped target paths.target - Path Units. May 14 04:55:37.326568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 04:55:37.327351 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 04:55:37.328580 systemd[1]: Stopped target slices.target - Slice Units. May 14 04:55:37.330399 systemd[1]: Stopped target sockets.target - Socket Units. May 14 04:55:37.331881 systemd[1]: iscsid.socket: Deactivated successfully. May 14 04:55:37.331968 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 04:55:37.333702 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 04:55:37.333807 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 04:55:37.335868 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 04:55:37.335983 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 04:55:37.337901 systemd[1]: ignition-files.service: Deactivated successfully. May 14 04:55:37.338024 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 04:55:37.340645 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 04:55:37.342060 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 04:55:37.342189 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 04:55:37.353297 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 04:55:37.354145 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 04:55:37.354262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 04:55:37.356104 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 14 04:55:37.356205 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 04:55:37.362368 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 04:55:37.362464 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 04:55:37.367925 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 04:55:37.369297 ignition[1040]: INFO : Ignition 2.21.0 May 14 04:55:37.369297 ignition[1040]: INFO : Stage: umount May 14 04:55:37.369297 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 04:55:37.369297 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 04:55:37.373048 ignition[1040]: INFO : umount: umount passed May 14 04:55:37.373048 ignition[1040]: INFO : Ignition finished successfully May 14 04:55:37.375460 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 04:55:37.375563 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 04:55:37.377390 systemd[1]: Stopped target network.target - Network. May 14 04:55:37.379000 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 04:55:37.379059 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 04:55:37.380644 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 04:55:37.380690 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 04:55:37.382511 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 04:55:37.382560 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 04:55:37.384249 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 04:55:37.384288 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 04:55:37.386023 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 04:55:37.387830 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
May 14 04:55:37.394269 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 04:55:37.394383 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 04:55:37.398794 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 04:55:37.399284 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 04:55:37.399369 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 04:55:37.403012 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 04:55:37.403209 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 04:55:37.403325 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 04:55:37.407548 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 04:55:37.407699 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 04:55:37.408915 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 04:55:37.408956 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 04:55:37.415637 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 04:55:37.416553 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 04:55:37.416623 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 04:55:37.418790 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 04:55:37.418840 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 04:55:37.421950 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 04:55:37.421993 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 04:55:37.425726 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 04:55:37.430234 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 04:55:37.430536 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 04:55:37.430685 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 04:55:37.434435 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 04:55:37.434503 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 04:55:37.447762 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 04:55:37.447923 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 04:55:37.454462 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 04:55:37.454604 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 04:55:37.456755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 04:55:37.456806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 04:55:37.458736 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 04:55:37.458765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 04:55:37.460530 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 04:55:37.460581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 04:55:37.463383 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 04:55:37.463429 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 04:55:37.466119 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 04:55:37.466175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 04:55:37.469516 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 04:55:37.470648 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 04:55:37.470701 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 04:55:37.473840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 04:55:37.473886 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 04:55:37.476895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 04:55:37.476935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 04:55:37.484974 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 04:55:37.485090 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 04:55:37.487167 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 04:55:37.489444 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 04:55:37.506347 systemd[1]: Switching root.
May 14 04:55:37.529992 systemd-journald[243]: Journal stopped
May 14 04:55:38.283761 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
May 14 04:55:38.283833 kernel: SELinux: policy capability network_peer_controls=1
May 14 04:55:38.283849 kernel: SELinux: policy capability open_perms=1
May 14 04:55:38.283858 kernel: SELinux: policy capability extended_socket_class=1
May 14 04:55:38.283869 kernel: SELinux: policy capability always_check_network=0
May 14 04:55:38.283881 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 04:55:38.283895 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 04:55:38.283905 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 04:55:38.283914 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 04:55:38.283923 kernel: SELinux: policy capability userspace_initial_context=0
May 14 04:55:38.283933 kernel: audit: type=1403 audit(1747198537.699:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 04:55:38.283947 systemd[1]: Successfully loaded SELinux policy in 55.247ms.
May 14 04:55:38.283963 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.267ms.
May 14 04:55:38.283976 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 04:55:38.283987 systemd[1]: Detected virtualization kvm.
May 14 04:55:38.283997 systemd[1]: Detected architecture arm64.
May 14 04:55:38.284007 systemd[1]: Detected first boot.
May 14 04:55:38.284017 systemd[1]: Initializing machine ID from VM UUID.
May 14 04:55:38.284026 kernel: NET: Registered PF_VSOCK protocol family
May 14 04:55:38.284040 zram_generator::config[1087]: No configuration found.
May 14 04:55:38.284050 systemd[1]: Populated /etc with preset unit settings.
May 14 04:55:38.284061 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 04:55:38.284071 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 04:55:38.284081 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 04:55:38.284092 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 04:55:38.284102 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 04:55:38.284112 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 04:55:38.284123 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 04:55:38.284133 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 04:55:38.284143 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 04:55:38.284153 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 04:55:38.284163 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 04:55:38.284172 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 04:55:38.284183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 04:55:38.284194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 04:55:38.284204 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 04:55:38.284216 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 04:55:38.284226 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 04:55:38.284236 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 04:55:38.284246 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 04:55:38.284256 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 04:55:38.284266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 04:55:38.284276 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 04:55:38.284292 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 04:55:38.284302 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 04:55:38.284312 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 04:55:38.284322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 04:55:38.284332 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 04:55:38.284342 systemd[1]: Reached target slices.target - Slice Units.
May 14 04:55:38.284352 systemd[1]: Reached target swap.target - Swaps.
May 14 04:55:38.284362 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 04:55:38.284372 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 04:55:38.284384 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 04:55:38.284394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 04:55:38.284404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 04:55:38.284414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 04:55:38.284424 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 04:55:38.284434 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 04:55:38.284444 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 04:55:38.284454 systemd[1]: Mounting media.mount - External Media Directory...
May 14 04:55:38.284464 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 04:55:38.284475 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 04:55:38.284485 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 04:55:38.284496 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 04:55:38.284507 systemd[1]: Reached target machines.target - Containers.
May 14 04:55:38.284516 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 04:55:38.284526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 04:55:38.284536 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 04:55:38.284546 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 04:55:38.284556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 04:55:38.284567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 04:55:38.284577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 04:55:38.284593 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 04:55:38.284604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 04:55:38.284614 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 04:55:38.284624 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 04:55:38.284635 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 04:55:38.284645 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 04:55:38.284656 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 04:55:38.284667 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 04:55:38.284677 kernel: fuse: init (API version 7.41)
May 14 04:55:38.284686 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 04:55:38.284696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 04:55:38.284706 kernel: loop: module loaded
May 14 04:55:38.284716 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 04:55:38.284725 kernel: ACPI: bus type drm_connector registered
May 14 04:55:38.284735 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 04:55:38.284747 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 04:55:38.284757 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 04:55:38.284767 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 04:55:38.284783 systemd[1]: Stopped verity-setup.service.
May 14 04:55:38.284794 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 04:55:38.284806 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 04:55:38.284836 systemd-journald[1166]: Collecting audit messages is disabled.
May 14 04:55:38.284859 systemd[1]: Mounted media.mount - External Media Directory.
May 14 04:55:38.284869 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 04:55:38.284879 systemd-journald[1166]: Journal started
May 14 04:55:38.284900 systemd-journald[1166]: Runtime Journal (/run/log/journal/7a466567a3c34f0f91379fbf630d06e8) is 6M, max 48.5M, 42.4M free.
May 14 04:55:38.068536 systemd[1]: Queued start job for default target multi-user.target.
May 14 04:55:38.089621 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 04:55:38.089996 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 04:55:38.287625 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 04:55:38.289314 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 04:55:38.290398 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 04:55:38.292810 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 04:55:38.294031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 04:55:38.295379 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 04:55:38.295529 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 04:55:38.296882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 04:55:38.297032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 04:55:38.298229 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 04:55:38.298384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 04:55:38.299602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 04:55:38.299753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 04:55:38.301167 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 04:55:38.301335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 04:55:38.302534 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 04:55:38.302698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 04:55:38.304142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 04:55:38.305382 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 04:55:38.306807 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 04:55:38.308304 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 04:55:38.320383 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 04:55:38.322738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 04:55:38.324724 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 04:55:38.325852 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 04:55:38.325880 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 04:55:38.327610 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 04:55:38.333498 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 04:55:38.334764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 04:55:38.337501 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 04:55:38.339374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 04:55:38.340468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 04:55:38.342983 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 04:55:38.345876 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 04:55:38.346705 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 04:55:38.348273 systemd-journald[1166]: Time spent on flushing to /var/log/journal/7a466567a3c34f0f91379fbf630d06e8 is 23.570ms for 883 entries.
May 14 04:55:38.348273 systemd-journald[1166]: System Journal (/var/log/journal/7a466567a3c34f0f91379fbf630d06e8) is 8M, max 195.6M, 187.6M free.
May 14 04:55:38.387096 systemd-journald[1166]: Received client request to flush runtime journal.
May 14 04:55:38.387153 kernel: loop0: detected capacity change from 0 to 107312
May 14 04:55:38.387175 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 04:55:38.350264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 04:55:38.353668 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 04:55:38.356395 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 04:55:38.359398 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 04:55:38.361309 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 04:55:38.363396 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 04:55:38.366005 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 04:55:38.368247 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 04:55:38.380287 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 04:55:38.390281 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 04:55:38.402015 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 04:55:38.406289 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 04:55:38.407806 kernel: loop1: detected capacity change from 0 to 138376
May 14 04:55:38.409924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 04:55:38.441001 kernel: loop2: detected capacity change from 0 to 194096
May 14 04:55:38.445864 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
May 14 04:55:38.445878 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
May 14 04:55:38.451936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 04:55:38.467799 kernel: loop3: detected capacity change from 0 to 107312
May 14 04:55:38.474805 kernel: loop4: detected capacity change from 0 to 138376
May 14 04:55:38.481847 kernel: loop5: detected capacity change from 0 to 194096
May 14 04:55:38.486207 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 04:55:38.486574 (sd-merge)[1225]: Merged extensions into '/usr'.
May 14 04:55:38.489956 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 04:55:38.489970 systemd[1]: Reloading...
May 14 04:55:38.537994 zram_generator::config[1248]: No configuration found.
May 14 04:55:38.613991 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 04:55:38.618836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 04:55:38.680269 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 04:55:38.680642 systemd[1]: Reloading finished in 190 ms.
May 14 04:55:38.711813 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 04:55:38.713119 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 04:55:38.726047 systemd[1]: Starting ensure-sysext.service...
May 14 04:55:38.727770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 04:55:38.738528 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
May 14 04:55:38.738544 systemd[1]: Reloading...
May 14 04:55:38.748628 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 04:55:38.749095 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 04:55:38.749334 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 04:55:38.749513 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 04:55:38.750335 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 04:55:38.750627 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
May 14 04:55:38.750736 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
May 14 04:55:38.753426 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
May 14 04:55:38.753514 systemd-tmpfiles[1287]: Skipping /boot
May 14 04:55:38.762256 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
May 14 04:55:38.762346 systemd-tmpfiles[1287]: Skipping /boot
May 14 04:55:38.784899 zram_generator::config[1317]: No configuration found.
May 14 04:55:38.851804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 04:55:38.914134 systemd[1]: Reloading finished in 175 ms.
May 14 04:55:38.932271 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 04:55:38.933768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 04:55:38.947756 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 04:55:38.949951 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 04:55:38.951939 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 04:55:38.954550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 04:55:38.959924 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 04:55:38.962446 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 04:55:38.968949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 04:55:38.970007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 04:55:38.977127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 04:55:38.979136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 04:55:38.980128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 04:55:38.980256 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 04:55:38.984614 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 04:55:38.988005 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 04:55:38.989624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 04:55:38.991845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 04:55:38.993557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 04:55:38.993710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 04:55:38.995301 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 04:55:38.995430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 04:55:39.002375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 04:55:39.003609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 04:55:39.008388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 04:55:39.014007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 04:55:39.014418 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
May 14 04:55:39.015007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 04:55:39.015124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 04:55:39.022006 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 04:55:39.024338 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 04:55:39.027960 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 04:55:39.030194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 04:55:39.030904 augenrules[1387]: No rules
May 14 04:55:39.030356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 04:55:39.032114 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 04:55:39.035357 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 04:55:39.035554 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 04:55:39.036895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 04:55:39.048196 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 04:55:39.049917 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 04:55:39.051644 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 04:55:39.051845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 04:55:39.058089 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 04:55:39.075156 systemd[1]: Finished ensure-sysext.service.
May 14 04:55:39.085000 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 04:55:39.086222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 04:55:39.088003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 04:55:39.100651 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 04:55:39.105008 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 04:55:39.107133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 04:55:39.108207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 04:55:39.108258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 04:55:39.110449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 04:55:39.117548 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 04:55:39.123741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 04:55:39.133911 augenrules[1432]: /sbin/augenrules: No change
May 14 04:55:39.135828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 04:55:39.135993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 04:55:39.140070 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 04:55:39.140228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 04:55:39.141669 augenrules[1459]: No rules
May 14 04:55:39.141668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 04:55:39.141855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 04:55:39.143103 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 04:55:39.143281 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 04:55:39.144481 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 04:55:39.144649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 04:55:39.155763 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 04:55:39.162328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 04:55:39.162386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 04:55:39.164908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 04:55:39.167391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 04:55:39.200327 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 04:55:39.208008 systemd-resolved[1353]: Positive Trust Anchors: May 14 04:55:39.208366 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 04:55:39.208649 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 04:55:39.215075 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 04:55:39.216423 systemd[1]: Reached target time-set.target - System Time Set. May 14 04:55:39.224796 systemd-resolved[1353]: Defaulting to hostname 'linux'. May 14 04:55:39.226648 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 04:55:39.227951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 14 04:55:39.229404 systemd[1]: Reached target sysinit.target - System Initialization. May 14 04:55:39.230729 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 04:55:39.232362 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 04:55:39.232554 systemd-networkd[1443]: lo: Link UP May 14 04:55:39.232570 systemd-networkd[1443]: lo: Gained carrier May 14 04:55:39.233342 systemd-networkd[1443]: Enumeration completed May 14 04:55:39.233729 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 04:55:39.233750 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 04:55:39.233754 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 04:55:39.234265 systemd-networkd[1443]: eth0: Link UP May 14 04:55:39.234390 systemd-networkd[1443]: eth0: Gained carrier May 14 04:55:39.234408 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 04:55:39.235362 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 04:55:39.236691 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 04:55:39.237947 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 04:55:39.237983 systemd[1]: Reached target paths.target - Path Units. May 14 04:55:39.239012 systemd[1]: Reached target timers.target - Timer Units. May 14 04:55:39.241128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 04:55:39.243345 systemd[1]: Starting docker.socket - Docker Socket for the API... 
May 14 04:55:39.246467 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 04:55:39.247963 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 04:55:39.249061 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 04:55:39.251838 systemd-networkd[1443]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 04:55:39.252474 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 04:55:39.252597 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. May 14 04:55:38.815277 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 04:55:38.820393 systemd-journald[1166]: Time jumped backwards, rotating. May 14 04:55:38.815320 systemd-timesyncd[1446]: Initial clock synchronization to Wed 2025-05-14 04:55:38.815197 UTC. May 14 04:55:38.816899 systemd-resolved[1353]: Clock change detected. Flushing caches. May 14 04:55:38.817176 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 04:55:38.819879 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 04:55:38.821046 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 04:55:38.823078 systemd[1]: Reached target network.target - Network. May 14 04:55:38.824457 systemd[1]: Reached target sockets.target - Socket Units. May 14 04:55:38.826772 systemd[1]: Reached target basic.target - Basic System. May 14 04:55:38.827608 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 04:55:38.827637 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 04:55:38.829931 systemd[1]: Starting containerd.service - containerd container runtime... 
May 14 04:55:38.832883 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 04:55:38.835820 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 04:55:38.838821 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 04:55:38.842955 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 04:55:38.843850 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 04:55:38.847220 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 04:55:38.858847 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 04:55:38.861758 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 04:55:38.864914 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 04:55:38.868754 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 04:55:38.872898 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 04:55:38.876895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 04:55:38.878173 jq[1498]: false May 14 04:55:38.879637 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 04:55:38.880047 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 04:55:38.882955 systemd[1]: Starting update-engine.service - Update Engine... 
May 14 04:55:38.887215 extend-filesystems[1499]: Found loop3 May 14 04:55:38.888899 extend-filesystems[1499]: Found loop4 May 14 04:55:38.890123 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 04:55:38.891801 extend-filesystems[1499]: Found loop5 May 14 04:55:38.891801 extend-filesystems[1499]: Found vda May 14 04:55:38.891801 extend-filesystems[1499]: Found vda1 May 14 04:55:38.891801 extend-filesystems[1499]: Found vda2 May 14 04:55:38.891801 extend-filesystems[1499]: Found vda3 May 14 04:55:38.891801 extend-filesystems[1499]: Found usr May 14 04:55:38.891801 extend-filesystems[1499]: Found vda4 May 14 04:55:38.891801 extend-filesystems[1499]: Found vda6 May 14 04:55:38.891801 extend-filesystems[1499]: Found vda7 May 14 04:55:38.891801 extend-filesystems[1499]: Found vda9 May 14 04:55:38.891801 extend-filesystems[1499]: Checking size of /dev/vda9 May 14 04:55:38.894714 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 04:55:38.898075 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 04:55:38.899613 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 04:55:38.902193 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 04:55:38.913125 jq[1515]: true May 14 04:55:38.902357 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 04:55:38.910433 systemd[1]: motdgen.service: Deactivated successfully. May 14 04:55:38.912447 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 14 04:55:38.926356 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 04:55:38.940947 extend-filesystems[1499]: Resized partition /dev/vda9 May 14 04:55:38.941224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 04:55:38.948061 jq[1522]: true May 14 04:55:38.949187 tar[1520]: linux-arm64/helm May 14 04:55:38.952584 update_engine[1514]: I20250514 04:55:38.952359 1514 main.cc:92] Flatcar Update Engine starting May 14 04:55:38.956673 extend-filesystems[1536]: resize2fs 1.47.2 (1-Jan-2025) May 14 04:55:38.960187 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 04:55:38.965730 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 04:55:38.994455 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 04:55:38.992554 dbus-daemon[1495]: [system] SELinux support is enabled May 14 04:55:38.992729 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 04:55:39.009967 update_engine[1514]: I20250514 04:55:39.000237 1514 update_check_scheduler.cc:74] Next update check in 7m5s May 14 04:55:38.997439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 04:55:38.997460 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 04:55:38.998649 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 04:55:38.998664 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 14 04:55:39.000101 systemd[1]: Started update-engine.service - Update Engine. May 14 04:55:39.003831 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 04:55:39.010958 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 04:55:39.010958 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 04:55:39.010958 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 04:55:39.020046 extend-filesystems[1499]: Resized filesystem in /dev/vda9 May 14 04:55:39.012085 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 04:55:39.012324 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 04:55:39.030770 bash[1558]: Updated "/home/core/.ssh/authorized_keys" May 14 04:55:39.056590 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (Power Button) May 14 04:55:39.056864 systemd-logind[1507]: New seat seat0. May 14 04:55:39.065205 systemd[1]: Started systemd-logind.service - User Login Management. May 14 04:55:39.066696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 04:55:39.068125 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 04:55:39.076358 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 14 04:55:39.091340 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 04:55:39.192049 containerd[1523]: time="2025-05-14T04:55:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 04:55:39.193850 containerd[1523]: time="2025-05-14T04:55:39.193815114Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 04:55:39.212466 containerd[1523]: time="2025-05-14T04:55:39.212425154Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.32µs" May 14 04:55:39.212466 containerd[1523]: time="2025-05-14T04:55:39.212462154Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 04:55:39.212548 containerd[1523]: time="2025-05-14T04:55:39.212480074Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 04:55:39.212763 containerd[1523]: time="2025-05-14T04:55:39.212673874Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 04:55:39.212805 containerd[1523]: time="2025-05-14T04:55:39.212767594Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 04:55:39.212805 containerd[1523]: time="2025-05-14T04:55:39.212798914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 04:55:39.212928 containerd[1523]: time="2025-05-14T04:55:39.212903314Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 04:55:39.212951 containerd[1523]: time="2025-05-14T04:55:39.212925434Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 04:55:39.213221 containerd[1523]: time="2025-05-14T04:55:39.213194874Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 04:55:39.213251 containerd[1523]: time="2025-05-14T04:55:39.213220554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 04:55:39.213297 containerd[1523]: time="2025-05-14T04:55:39.213277354Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 04:55:39.213322 containerd[1523]: time="2025-05-14T04:55:39.213294514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 04:55:39.213388 containerd[1523]: time="2025-05-14T04:55:39.213372394Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 04:55:39.213740 containerd[1523]: time="2025-05-14T04:55:39.213716234Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 04:55:39.213777 containerd[1523]: time="2025-05-14T04:55:39.213760554Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 04:55:39.213801 containerd[1523]: time="2025-05-14T04:55:39.213776834Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 04:55:39.213825 containerd[1523]: time="2025-05-14T04:55:39.213815194Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 04:55:39.215248 containerd[1523]: time="2025-05-14T04:55:39.214947634Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 04:55:39.215248 containerd[1523]: time="2025-05-14T04:55:39.215060874Z" level=info msg="metadata content store policy set" policy=shared May 14 04:55:39.218931 containerd[1523]: time="2025-05-14T04:55:39.218895874Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 04:55:39.218992 containerd[1523]: time="2025-05-14T04:55:39.218950994Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 04:55:39.218992 containerd[1523]: time="2025-05-14T04:55:39.218973354Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 04:55:39.218992 containerd[1523]: time="2025-05-14T04:55:39.218985994Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 04:55:39.219037 containerd[1523]: time="2025-05-14T04:55:39.218998754Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 04:55:39.219037 containerd[1523]: time="2025-05-14T04:55:39.219009354Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 04:55:39.219037 containerd[1523]: time="2025-05-14T04:55:39.219020394Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 04:55:39.219037 containerd[1523]: time="2025-05-14T04:55:39.219031914Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 04:55:39.219112 containerd[1523]: time="2025-05-14T04:55:39.219043394Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 04:55:39.219112 containerd[1523]: time="2025-05-14T04:55:39.219053634Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 04:55:39.219112 containerd[1523]: time="2025-05-14T04:55:39.219062514Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 04:55:39.219112 containerd[1523]: time="2025-05-14T04:55:39.219074394Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 04:55:39.219428 containerd[1523]: time="2025-05-14T04:55:39.219187114Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 04:55:39.219462 containerd[1523]: time="2025-05-14T04:55:39.219436234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 04:55:39.219479 containerd[1523]: time="2025-05-14T04:55:39.219462194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 04:55:39.219479 containerd[1523]: time="2025-05-14T04:55:39.219473554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 04:55:39.219524 containerd[1523]: time="2025-05-14T04:55:39.219488354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 04:55:39.219568 containerd[1523]: time="2025-05-14T04:55:39.219499874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 04:55:39.219591 containerd[1523]: time="2025-05-14T04:55:39.219580514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 04:55:39.219615 containerd[1523]: time="2025-05-14T04:55:39.219593394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 
04:55:39.219615 containerd[1523]: time="2025-05-14T04:55:39.219610994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 04:55:39.219648 containerd[1523]: time="2025-05-14T04:55:39.219625554Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 04:55:39.219648 containerd[1523]: time="2025-05-14T04:55:39.219635914Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 04:55:39.220128 containerd[1523]: time="2025-05-14T04:55:39.220107954Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 04:55:39.220151 containerd[1523]: time="2025-05-14T04:55:39.220140074Z" level=info msg="Start snapshots syncer" May 14 04:55:39.220249 containerd[1523]: time="2025-05-14T04:55:39.220231754Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 04:55:39.220772 containerd[1523]: time="2025-05-14T04:55:39.220730994Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 04:55:39.220866 containerd[1523]: time="2025-05-14T04:55:39.220845914Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 04:55:39.220958 containerd[1523]: time="2025-05-14T04:55:39.220937234Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 04:55:39.221106 containerd[1523]: time="2025-05-14T04:55:39.221085674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 04:55:39.221130 containerd[1523]: time="2025-05-14T04:55:39.221117394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 04:55:39.221147 containerd[1523]: time="2025-05-14T04:55:39.221129514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 04:55:39.221147 containerd[1523]: time="2025-05-14T04:55:39.221140114Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 04:55:39.221184 containerd[1523]: time="2025-05-14T04:55:39.221151834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 04:55:39.221184 containerd[1523]: time="2025-05-14T04:55:39.221162194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 04:55:39.221184 containerd[1523]: time="2025-05-14T04:55:39.221173034Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 04:55:39.221229 containerd[1523]: time="2025-05-14T04:55:39.221197434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 04:55:39.221229 containerd[1523]: time="2025-05-14T04:55:39.221212874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 04:55:39.221229 containerd[1523]: time="2025-05-14T04:55:39.221223714Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 04:55:39.221292 containerd[1523]: time="2025-05-14T04:55:39.221258514Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 04:55:39.221292 containerd[1523]: time="2025-05-14T04:55:39.221274714Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 04:55:39.221292 containerd[1523]: time="2025-05-14T04:55:39.221283034Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 04:55:39.221432 containerd[1523]: time="2025-05-14T04:55:39.221292914Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 04:55:39.221432 containerd[1523]: time="2025-05-14T04:55:39.221301074Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 04:55:39.221432 containerd[1523]: time="2025-05-14T04:55:39.221310474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 04:55:39.221432 containerd[1523]: time="2025-05-14T04:55:39.221320514Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 04:55:39.221596 containerd[1523]: time="2025-05-14T04:55:39.221573274Z" level=info msg="runtime interface created" May 14 04:55:39.221596 containerd[1523]: time="2025-05-14T04:55:39.221589954Z" level=info msg="created NRI interface" May 14 04:55:39.221636 containerd[1523]: time="2025-05-14T04:55:39.221603474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 04:55:39.221636 containerd[1523]: time="2025-05-14T04:55:39.221616594Z" level=info msg="Connect containerd service" May 14 04:55:39.221666 containerd[1523]: time="2025-05-14T04:55:39.221644434Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 04:55:39.222698 
containerd[1523]: time="2025-05-14T04:55:39.222670474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 04:55:39.345063 containerd[1523]: time="2025-05-14T04:55:39.344887914Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 04:55:39.345063 containerd[1523]: time="2025-05-14T04:55:39.344922034Z" level=info msg="Start subscribing containerd event" May 14 04:55:39.345063 containerd[1523]: time="2025-05-14T04:55:39.344949634Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 04:55:39.345191 containerd[1523]: time="2025-05-14T04:55:39.345007274Z" level=info msg="Start recovering state" May 14 04:55:39.345209 containerd[1523]: time="2025-05-14T04:55:39.345195114Z" level=info msg="Start event monitor" May 14 04:55:39.345226 containerd[1523]: time="2025-05-14T04:55:39.345208634Z" level=info msg="Start cni network conf syncer for default" May 14 04:55:39.345226 containerd[1523]: time="2025-05-14T04:55:39.345217514Z" level=info msg="Start streaming server" May 14 04:55:39.345256 containerd[1523]: time="2025-05-14T04:55:39.345226394Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 04:55:39.345256 containerd[1523]: time="2025-05-14T04:55:39.345233634Z" level=info msg="runtime interface starting up..." May 14 04:55:39.345256 containerd[1523]: time="2025-05-14T04:55:39.345239554Z" level=info msg="starting plugins..." May 14 04:55:39.345256 containerd[1523]: time="2025-05-14T04:55:39.345252834Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 04:55:39.345477 containerd[1523]: time="2025-05-14T04:55:39.345456354Z" level=info msg="containerd successfully booted in 0.153776s" May 14 04:55:39.345556 systemd[1]: Started containerd.service - containerd container runtime. 
May 14 04:55:39.375042 tar[1520]: linux-arm64/LICENSE May 14 04:55:39.375042 tar[1520]: linux-arm64/README.md May 14 04:55:39.392946 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 04:55:40.395459 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 04:55:40.413672 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 04:55:40.418355 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 04:55:40.434791 systemd[1]: issuegen.service: Deactivated successfully. May 14 04:55:40.435753 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 04:55:40.439179 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 04:55:40.460187 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 04:55:40.463608 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 04:55:40.466314 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 04:55:40.467675 systemd[1]: Reached target getty.target - Login Prompts. May 14 04:55:40.607849 systemd-networkd[1443]: eth0: Gained IPv6LL May 14 04:55:40.611761 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 04:55:40.613475 systemd[1]: Reached target network-online.target - Network is Online. May 14 04:55:40.616158 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 04:55:40.618459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:55:40.635697 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 04:55:40.648903 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 04:55:40.649114 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 04:55:40.651118 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 14 04:55:40.654678 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 04:55:41.107395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:55:41.108991 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 04:55:41.110027 systemd[1]: Startup finished in 2.130s (kernel) + 5.053s (initrd) + 3.910s (userspace) = 11.094s. May 14 04:55:41.120965 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 04:55:41.564784 kubelet[1631]: E0514 04:55:41.564672 1631 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 04:55:41.567217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 04:55:41.567353 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 04:55:41.567667 systemd[1]: kubelet.service: Consumed 787ms CPU time, 238.3M memory peak. May 14 04:55:45.627132 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 04:55:45.628282 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:34238.service - OpenSSH per-connection server daemon (10.0.0.1:34238). May 14 04:55:45.707997 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:45.709821 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:45.717294 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 04:55:45.718237 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 14 04:55:45.723437 systemd-logind[1507]: New session 1 of user core. May 14 04:55:45.739185 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 04:55:45.741634 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 04:55:45.758464 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 04:55:45.760545 systemd-logind[1507]: New session c1 of user core. May 14 04:55:45.852241 systemd[1650]: Queued start job for default target default.target. May 14 04:55:45.870748 systemd[1650]: Created slice app.slice - User Application Slice. May 14 04:55:45.870777 systemd[1650]: Reached target paths.target - Paths. May 14 04:55:45.870816 systemd[1650]: Reached target timers.target - Timers. May 14 04:55:45.873737 systemd[1650]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 04:55:45.883329 systemd[1650]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 04:55:45.883385 systemd[1650]: Reached target sockets.target - Sockets. May 14 04:55:45.883425 systemd[1650]: Reached target basic.target - Basic System. May 14 04:55:45.883453 systemd[1650]: Reached target default.target - Main User Target. May 14 04:55:45.883491 systemd[1650]: Startup finished in 117ms. May 14 04:55:45.883790 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 04:55:45.885378 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 04:55:45.951216 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:34250.service - OpenSSH per-connection server daemon (10.0.0.1:34250). May 14 04:55:46.006500 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 34250 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:46.004058 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:46.010442 systemd-logind[1507]: New session 2 of user core. 
May 14 04:55:46.017850 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 04:55:46.067798 sshd[1663]: Connection closed by 10.0.0.1 port 34250 May 14 04:55:46.068197 sshd-session[1661]: pam_unix(sshd:session): session closed for user core May 14 04:55:46.078554 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:34250.service: Deactivated successfully. May 14 04:55:46.080241 systemd[1]: session-2.scope: Deactivated successfully. May 14 04:55:46.081865 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit. May 14 04:55:46.083677 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:34256.service - OpenSSH per-connection server daemon (10.0.0.1:34256). May 14 04:55:46.084732 systemd-logind[1507]: Removed session 2. May 14 04:55:46.133206 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 34256 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:46.133039 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:46.137843 systemd-logind[1507]: New session 3 of user core. May 14 04:55:46.141862 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 04:55:46.189762 sshd[1671]: Connection closed by 10.0.0.1 port 34256 May 14 04:55:46.189978 sshd-session[1669]: pam_unix(sshd:session): session closed for user core May 14 04:55:46.212606 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:34256.service: Deactivated successfully. May 14 04:55:46.214060 systemd[1]: session-3.scope: Deactivated successfully. May 14 04:55:46.214716 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit. May 14 04:55:46.216878 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:34268.service - OpenSSH per-connection server daemon (10.0.0.1:34268). May 14 04:55:46.217845 systemd-logind[1507]: Removed session 3. 
May 14 04:55:46.266005 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 34268 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:46.267120 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:46.270918 systemd-logind[1507]: New session 4 of user core. May 14 04:55:46.280847 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 04:55:46.332766 sshd[1680]: Connection closed by 10.0.0.1 port 34268 May 14 04:55:46.332827 sshd-session[1677]: pam_unix(sshd:session): session closed for user core May 14 04:55:46.341950 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:34268.service: Deactivated successfully. May 14 04:55:46.343317 systemd[1]: session-4.scope: Deactivated successfully. May 14 04:55:46.343961 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit. May 14 04:55:46.346306 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:34270.service - OpenSSH per-connection server daemon (10.0.0.1:34270). May 14 04:55:46.346836 systemd-logind[1507]: Removed session 4. May 14 04:55:46.391773 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 34270 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:46.393202 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:46.397577 systemd-logind[1507]: New session 5 of user core. May 14 04:55:46.410848 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 04:55:46.477982 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 04:55:46.478277 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:55:46.501293 sudo[1689]: pam_unix(sudo:session): session closed for user root May 14 04:55:46.502797 sshd[1688]: Connection closed by 10.0.0.1 port 34270 May 14 04:55:46.503216 sshd-session[1686]: pam_unix(sshd:session): session closed for user core May 14 04:55:46.510676 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:34270.service: Deactivated successfully. May 14 04:55:46.512150 systemd[1]: session-5.scope: Deactivated successfully. May 14 04:55:46.512890 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit. May 14 04:55:46.515434 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:34282.service - OpenSSH per-connection server daemon (10.0.0.1:34282). May 14 04:55:46.516083 systemd-logind[1507]: Removed session 5. May 14 04:55:46.568653 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 34282 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:46.569898 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:46.574447 systemd-logind[1507]: New session 6 of user core. May 14 04:55:46.587907 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 04:55:46.639010 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 04:55:46.639270 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:55:46.698126 sudo[1699]: pam_unix(sudo:session): session closed for user root May 14 04:55:46.702944 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 04:55:46.703207 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:55:46.711150 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 04:55:46.754511 augenrules[1721]: No rules May 14 04:55:46.755507 systemd[1]: audit-rules.service: Deactivated successfully. May 14 04:55:46.755698 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 04:55:46.756888 sudo[1698]: pam_unix(sudo:session): session closed for user root May 14 04:55:46.758022 sshd[1697]: Connection closed by 10.0.0.1 port 34282 May 14 04:55:46.758483 sshd-session[1695]: pam_unix(sshd:session): session closed for user core May 14 04:55:46.770585 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:34282.service: Deactivated successfully. May 14 04:55:46.772919 systemd[1]: session-6.scope: Deactivated successfully. May 14 04:55:46.774399 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. May 14 04:55:46.775573 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:34298.service - OpenSSH per-connection server daemon (10.0.0.1:34298). May 14 04:55:46.776383 systemd-logind[1507]: Removed session 6. May 14 04:55:46.820631 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 34298 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:55:46.821684 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:55:46.825278 systemd-logind[1507]: New session 7 of user core. 
May 14 04:55:46.840860 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 04:55:46.890657 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 04:55:46.891273 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 04:55:47.255888 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 04:55:47.268975 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 04:55:47.526926 dockerd[1754]: time="2025-05-14T04:55:47.526805434Z" level=info msg="Starting up" May 14 04:55:47.528239 dockerd[1754]: time="2025-05-14T04:55:47.528214474Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 04:55:47.556573 systemd[1]: var-lib-docker-metacopy\x2dcheck595148684-merged.mount: Deactivated successfully. May 14 04:55:47.566413 dockerd[1754]: time="2025-05-14T04:55:47.566118834Z" level=info msg="Loading containers: start." May 14 04:55:47.575730 kernel: Initializing XFRM netlink socket May 14 04:55:47.760484 systemd-networkd[1443]: docker0: Link UP May 14 04:55:47.763378 dockerd[1754]: time="2025-05-14T04:55:47.763281394Z" level=info msg="Loading containers: done." May 14 04:55:47.776195 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2410263836-merged.mount: Deactivated successfully. 
May 14 04:55:47.778180 dockerd[1754]: time="2025-05-14T04:55:47.778102474Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 04:55:47.778180 dockerd[1754]: time="2025-05-14T04:55:47.778174514Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 04:55:47.778281 dockerd[1754]: time="2025-05-14T04:55:47.778263154Z" level=info msg="Initializing buildkit" May 14 04:55:47.797990 dockerd[1754]: time="2025-05-14T04:55:47.797961754Z" level=info msg="Completed buildkit initialization" May 14 04:55:47.804302 dockerd[1754]: time="2025-05-14T04:55:47.804260194Z" level=info msg="Daemon has completed initialization" May 14 04:55:47.804379 dockerd[1754]: time="2025-05-14T04:55:47.804313874Z" level=info msg="API listen on /run/docker.sock" May 14 04:55:47.804590 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 04:55:48.682884 containerd[1523]: time="2025-05-14T04:55:48.682843474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 04:55:49.264438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168842749.mount: Deactivated successfully. 
May 14 04:55:50.575973 containerd[1523]: time="2025-05-14T04:55:50.575796674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:50.576747 containerd[1523]: time="2025-05-14T04:55:50.576674874Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 14 04:55:50.577404 containerd[1523]: time="2025-05-14T04:55:50.577371154Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:50.580905 containerd[1523]: time="2025-05-14T04:55:50.580871954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:50.581464 containerd[1523]: time="2025-05-14T04:55:50.581425994Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.89854024s" May 14 04:55:50.581516 containerd[1523]: time="2025-05-14T04:55:50.581470314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 14 04:55:50.597594 containerd[1523]: time="2025-05-14T04:55:50.597565794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 04:55:51.817726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 14 04:55:51.819635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:55:51.934596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:55:51.938478 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 04:55:51.999046 kubelet[2047]: E0514 04:55:51.998764 2047 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 04:55:52.002621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 04:55:52.002770 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 04:55:52.003040 systemd[1]: kubelet.service: Consumed 144ms CPU time, 95M memory peak. 
May 14 04:55:52.334677 containerd[1523]: time="2025-05-14T04:55:52.334624914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:52.335904 containerd[1523]: time="2025-05-14T04:55:52.335871354Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 14 04:55:52.336869 containerd[1523]: time="2025-05-14T04:55:52.336808874Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:52.339021 containerd[1523]: time="2025-05-14T04:55:52.338991354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:52.339809 containerd[1523]: time="2025-05-14T04:55:52.339786194Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.74209148s" May 14 04:55:52.339873 containerd[1523]: time="2025-05-14T04:55:52.339814634Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 14 04:55:52.355718 containerd[1523]: time="2025-05-14T04:55:52.355605994Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 14 04:55:53.477355 containerd[1523]: time="2025-05-14T04:55:53.476867274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:53.477355 containerd[1523]: time="2025-05-14T04:55:53.477309634Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 14 04:55:53.478264 containerd[1523]: time="2025-05-14T04:55:53.478239194Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:53.481400 containerd[1523]: time="2025-05-14T04:55:53.481326674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:53.482430 containerd[1523]: time="2025-05-14T04:55:53.482378674Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.12674056s" May 14 04:55:53.482430 containerd[1523]: time="2025-05-14T04:55:53.482413194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 14 04:55:53.499094 containerd[1523]: time="2025-05-14T04:55:53.499052994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 04:55:54.409932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650312184.mount: Deactivated successfully. 
May 14 04:55:54.613542 containerd[1523]: time="2025-05-14T04:55:54.613488594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:54.614393 containerd[1523]: time="2025-05-14T04:55:54.614362234Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 14 04:55:54.615068 containerd[1523]: time="2025-05-14T04:55:54.615032514Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:54.616756 containerd[1523]: time="2025-05-14T04:55:54.616725394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:54.617279 containerd[1523]: time="2025-05-14T04:55:54.617247234Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.1181524s" May 14 04:55:54.617317 containerd[1523]: time="2025-05-14T04:55:54.617277994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 04:55:54.631876 containerd[1523]: time="2025-05-14T04:55:54.631844114Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 04:55:55.291752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068546074.mount: Deactivated successfully. 
May 14 04:55:56.077564 containerd[1523]: time="2025-05-14T04:55:56.077429794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:56.078723 containerd[1523]: time="2025-05-14T04:55:56.078678794Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 04:55:56.079844 containerd[1523]: time="2025-05-14T04:55:56.079802114Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:56.082070 containerd[1523]: time="2025-05-14T04:55:56.082009074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:56.083051 containerd[1523]: time="2025-05-14T04:55:56.083019074Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.45114284s" May 14 04:55:56.083100 containerd[1523]: time="2025-05-14T04:55:56.083053274Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 04:55:56.098679 containerd[1523]: time="2025-05-14T04:55:56.098648874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 04:55:56.588471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274338665.mount: Deactivated successfully. 
May 14 04:55:56.594865 containerd[1523]: time="2025-05-14T04:55:56.594822634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:56.595374 containerd[1523]: time="2025-05-14T04:55:56.595350674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 14 04:55:56.596192 containerd[1523]: time="2025-05-14T04:55:56.596162194Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:56.598022 containerd[1523]: time="2025-05-14T04:55:56.597971274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:56.598719 containerd[1523]: time="2025-05-14T04:55:56.598538114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 499.85508ms" May 14 04:55:56.598719 containerd[1523]: time="2025-05-14T04:55:56.598571434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 14 04:55:56.613037 containerd[1523]: time="2025-05-14T04:55:56.613008514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 04:55:57.111649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115802013.mount: Deactivated successfully. 
May 14 04:55:59.134275 containerd[1523]: time="2025-05-14T04:55:59.134221114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:59.134758 containerd[1523]: time="2025-05-14T04:55:59.134723114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 14 04:55:59.135590 containerd[1523]: time="2025-05-14T04:55:59.135526674Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:59.138135 containerd[1523]: time="2025-05-14T04:55:59.138090634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:55:59.139958 containerd[1523]: time="2025-05-14T04:55:59.139926794Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.52688588s" May 14 04:55:59.140002 containerd[1523]: time="2025-05-14T04:55:59.139965794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 14 04:56:02.237315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 04:56:02.238859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:56:02.365112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 04:56:02.375152 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 04:56:02.412086 kubelet[2306]: E0514 04:56:02.412045 2306 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 04:56:02.414214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 04:56:02.414338 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 04:56:02.414735 systemd[1]: kubelet.service: Consumed 127ms CPU time, 95M memory peak. May 14 04:56:03.869967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:56:03.870134 systemd[1]: kubelet.service: Consumed 127ms CPU time, 95M memory peak. May 14 04:56:03.872358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:56:03.891339 systemd[1]: Reload requested from client PID 2320 ('systemctl') (unit session-7.scope)... May 14 04:56:03.891357 systemd[1]: Reloading... May 14 04:56:03.984742 zram_generator::config[2364]: No configuration found. May 14 04:56:04.061839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 04:56:04.149539 systemd[1]: Reloading finished in 257 ms. May 14 04:56:04.216415 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 04:56:04.216512 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 04:56:04.216791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 04:56:04.216845 systemd[1]: kubelet.service: Consumed 84ms CPU time, 82.3M memory peak. May 14 04:56:04.218651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:56:04.340017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:56:04.354255 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 04:56:04.394419 kubelet[2409]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 04:56:04.394804 kubelet[2409]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 04:56:04.394882 kubelet[2409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 04:56:04.395225 kubelet[2409]: I0514 04:56:04.395190 2409 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 04:56:05.234841 kubelet[2409]: I0514 04:56:05.234793 2409 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 04:56:05.234841 kubelet[2409]: I0514 04:56:05.234825 2409 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 04:56:05.235056 kubelet[2409]: I0514 04:56:05.235040 2409 server.go:927] "Client rotation is on, will bootstrap in background" May 14 04:56:05.296733 kubelet[2409]: I0514 04:56:05.293533 2409 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 04:56:05.297487 kubelet[2409]: E0514 04:56:05.297461 2409 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.80:6443: connect: connection refused
May 14 04:56:05.305894 kubelet[2409]: I0514 04:56:05.305873 2409 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 04:56:05.306986 kubelet[2409]: I0514 04:56:05.306947 2409 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 04:56:05.307226 kubelet[2409]: I0514 04:56:05.307066 2409 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 04:56:05.307433 kubelet[2409]: I0514 04:56:05.307419 2409 topology_manager.go:138] "Creating topology manager with none policy"
May 14 04:56:05.307489 kubelet[2409]: I0514 04:56:05.307481 2409 container_manager_linux.go:301] "Creating device plugin manager" May 14 04:56:05.307800 kubelet[2409]: I0514 04:56:05.307786 2409 state_mem.go:36] "Initialized new in-memory state store" May 14 04:56:05.310642 kubelet[2409]: I0514 04:56:05.310624 2409 kubelet.go:400] "Attempting to sync node with API server" May 14 04:56:05.310752 kubelet[2409]: I0514 04:56:05.310739 2409 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 04:56:05.310993 kubelet[2409]: I0514 04:56:05.310984 2409 kubelet.go:312] "Adding apiserver pod source" May 14 04:56:05.311058 kubelet[2409]: I0514 04:56:05.311049 2409 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 04:56:05.311291 kubelet[2409]: W0514 04:56:05.311203 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:05.311291 kubelet[2409]: E0514 04:56:05.311264 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:05.311721 kubelet[2409]: W0514 04:56:05.311664 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
May 14 04:56:05.311774 kubelet[2409]: E0514 04:56:05.311730 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:05.312253 kubelet[2409]: I0514 04:56:05.312236 2409 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 04:56:05.312602 kubelet[2409]: I0514 04:56:05.312580 2409 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 04:56:05.312790 kubelet[2409]: W0514 04:56:05.312777 2409 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 04:56:05.315860 kubelet[2409]: I0514 04:56:05.313571 2409 server.go:1264] "Started kubelet" May 14 04:56:05.315860 kubelet[2409]: I0514 04:56:05.313768 2409 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 04:56:05.315860 kubelet[2409]: I0514 04:56:05.314921 2409 server.go:455] "Adding debug handlers to kubelet server" May 14 04:56:05.321314 kubelet[2409]: E0514 04:56:05.316418 2409 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f4bd68d96020a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 04:56:05.313544714 +0000 UTC m=+0.955412521,LastTimestamp:2025-05-14 04:56:05.313544714 +0000 UTC m=+0.955412521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 04:56:05.321314 kubelet[2409]: I0514 04:56:05.318975 2409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 04:56:05.321314 kubelet[2409]: I0514 04:56:05.320632 2409 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 04:56:05.321314 kubelet[2409]: I0514 04:56:05.320840 2409 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 04:56:05.322081 kubelet[2409]: E0514 04:56:05.322045 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 04:56:05.322579 kubelet[2409]: I0514 04:56:05.322550 2409 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 04:56:05.322744 kubelet[2409]: I0514 04:56:05.322728 2409 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 04:56:05.326588 kubelet[2409]: I0514 04:56:05.323828 2409 reconciler.go:26] "Reconciler: start to sync state" May 14 04:56:05.326588 kubelet[2409]: W0514 04:56:05.324206 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:05.326588 kubelet[2409]: E0514 04:56:05.324243 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:05.327209 kubelet[2409]: E0514 04:56:05.327179 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" May 14 04:56:05.334544 kubelet[2409]: I0514 04:56:05.334523 2409 factory.go:221] Registration of the systemd container factory successfully May 14 04:56:05.334699 kubelet[2409]: I0514 04:56:05.334681 2409 factory.go:219] Registration of 
the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 04:56:05.336165 kubelet[2409]: I0514 04:56:05.336149 2409 factory.go:221] Registration of the containerd container factory successfully May 14 04:56:05.336302 kubelet[2409]: E0514 04:56:05.336224 2409 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 04:56:05.346054 kubelet[2409]: I0514 04:56:05.346002 2409 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 04:56:05.347208 kubelet[2409]: I0514 04:56:05.347055 2409 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 04:56:05.347494 kubelet[2409]: I0514 04:56:05.347473 2409 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 04:56:05.347494 kubelet[2409]: I0514 04:56:05.347497 2409 kubelet.go:2337] "Starting kubelet main sync loop" May 14 04:56:05.347814 kubelet[2409]: E0514 04:56:05.347547 2409 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 04:56:05.347876 kubelet[2409]: W0514 04:56:05.347842 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:05.347905 kubelet[2409]: E0514 04:56:05.347877 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:05.348579 kubelet[2409]: I0514 04:56:05.348562 2409 
cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 04:56:05.348579 kubelet[2409]: I0514 04:56:05.348574 2409 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 04:56:05.348656 kubelet[2409]: I0514 04:56:05.348590 2409 state_mem.go:36] "Initialized new in-memory state store" May 14 04:56:05.424390 kubelet[2409]: I0514 04:56:05.424071 2409 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 04:56:05.424390 kubelet[2409]: E0514 04:56:05.424359 2409 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" May 14 04:56:05.448114 kubelet[2409]: E0514 04:56:05.448081 2409 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 04:56:05.527935 kubelet[2409]: E0514 04:56:05.527895 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" May 14 04:56:05.546738 kubelet[2409]: I0514 04:56:05.546685 2409 policy_none.go:49] "None policy: Start" May 14 04:56:05.547494 kubelet[2409]: I0514 04:56:05.547465 2409 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 04:56:05.547552 kubelet[2409]: I0514 04:56:05.547501 2409 state_mem.go:35] "Initializing new in-memory state store" May 14 04:56:05.553106 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 04:56:05.563362 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 04:56:05.566007 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 04:56:05.582412 kubelet[2409]: I0514 04:56:05.582387 2409 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 04:56:05.582615 kubelet[2409]: I0514 04:56:05.582574 2409 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 04:56:05.582713 kubelet[2409]: I0514 04:56:05.582686 2409 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 04:56:05.584275 kubelet[2409]: E0514 04:56:05.584245 2409 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 04:56:05.625677 kubelet[2409]: I0514 04:56:05.625649 2409 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 04:56:05.626040 kubelet[2409]: E0514 04:56:05.626015 2409 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" May 14 04:56:05.649173 kubelet[2409]: I0514 04:56:05.649136 2409 topology_manager.go:215] "Topology Admit Handler" podUID="1822f1523f0800e0cbfb570319c3f31e" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 04:56:05.650195 kubelet[2409]: I0514 04:56:05.650169 2409 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 04:56:05.651135 kubelet[2409]: I0514 04:56:05.651094 2409 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 04:56:05.656521 systemd[1]: Created slice kubepods-burstable-pod1822f1523f0800e0cbfb570319c3f31e.slice - libcontainer container kubepods-burstable-pod1822f1523f0800e0cbfb570319c3f31e.slice. 
May 14 04:56:05.684839 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 14 04:56:05.689642 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 14 04:56:05.725628 kubelet[2409]: I0514 04:56:05.725590 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:05.725883 kubelet[2409]: I0514 04:56:05.725721 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:05.725883 kubelet[2409]: I0514 04:56:05.725757 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:05.725883 kubelet[2409]: I0514 04:56:05.725781 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 04:56:05.725883 kubelet[2409]: I0514 04:56:05.725799 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1822f1523f0800e0cbfb570319c3f31e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1822f1523f0800e0cbfb570319c3f31e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:56:05.725883 kubelet[2409]: I0514 04:56:05.725825 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:05.726025 kubelet[2409]: I0514 04:56:05.725842 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 04:56:05.726025 kubelet[2409]: I0514 04:56:05.725857 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1822f1523f0800e0cbfb570319c3f31e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1822f1523f0800e0cbfb570319c3f31e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:56:05.726025 kubelet[2409]: I0514 04:56:05.725875 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1822f1523f0800e0cbfb570319c3f31e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1822f1523f0800e0cbfb570319c3f31e\") " 
pod="kube-system/kube-apiserver-localhost" May 14 04:56:05.929215 kubelet[2409]: E0514 04:56:05.929095 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" May 14 04:56:05.982064 containerd[1523]: time="2025-05-14T04:56:05.982008194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1822f1523f0800e0cbfb570319c3f31e,Namespace:kube-system,Attempt:0,}" May 14 04:56:05.988585 containerd[1523]: time="2025-05-14T04:56:05.988535154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 04:56:05.994122 containerd[1523]: time="2025-05-14T04:56:05.994076834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 04:56:06.027370 kubelet[2409]: I0514 04:56:06.027347 2409 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 04:56:06.027838 kubelet[2409]: E0514 04:56:06.027803 2409 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" May 14 04:56:06.206916 kubelet[2409]: W0514 04:56:06.206757 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.206916 kubelet[2409]: E0514 04:56:06.206834 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.426977 kubelet[2409]: W0514 04:56:06.426906 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.426977 kubelet[2409]: W0514 04:56:06.426906 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.426977 kubelet[2409]: E0514 04:56:06.426951 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.426977 kubelet[2409]: E0514 04:56:06.426962 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.537077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076112986.mount: Deactivated successfully. 
May 14 04:56:06.540999 containerd[1523]: time="2025-05-14T04:56:06.540952554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 04:56:06.542469 containerd[1523]: time="2025-05-14T04:56:06.542431554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 04:56:06.543106 containerd[1523]: time="2025-05-14T04:56:06.543066074Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 04:56:06.545631 containerd[1523]: time="2025-05-14T04:56:06.545577714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 04:56:06.546750 containerd[1523]: time="2025-05-14T04:56:06.546718714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 04:56:06.546896 containerd[1523]: time="2025-05-14T04:56:06.546865634Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 04:56:06.547274 containerd[1523]: time="2025-05-14T04:56:06.547245234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 04:56:06.548735 containerd[1523]: time="2025-05-14T04:56:06.547841914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 
04:56:06.548735 containerd[1523]: time="2025-05-14T04:56:06.548496394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 563.25668ms" May 14 04:56:06.551636 containerd[1523]: time="2025-05-14T04:56:06.551133714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 560.96516ms" May 14 04:56:06.553380 containerd[1523]: time="2025-05-14T04:56:06.553340914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 557.56428ms" May 14 04:56:06.563748 containerd[1523]: time="2025-05-14T04:56:06.563691914Z" level=info msg="connecting to shim d773c1c751a977f9298f61e708bdeadf6cfb63fdfd7a6e3b4e30fe41160b498d" address="unix:///run/containerd/s/eb037d9699a7ad94cfdcf32a18b0e27edb385528c4bf9ece1c87a140b6c4c573" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:06.580107 containerd[1523]: time="2025-05-14T04:56:06.580065034Z" level=info msg="connecting to shim 320b9ea1c4acfb68c8049b13ab901e084b6be33cdb05a51dc683bdcdbc507922" address="unix:///run/containerd/s/58a2fd989a56fd88416ef2a14510511034c3ab2341259eda4a5f0f397e2a63b8" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:06.580211 containerd[1523]: time="2025-05-14T04:56:06.580136314Z" level=info msg="connecting to shim 
362e4a03cc436295e0ac31f4ed511f88d76038eca3511fb88edfc6ab1fb91918" address="unix:///run/containerd/s/f00923f06840ee6a5ad36dbdc794228e3add2d431124783aeeac5bb5c240f058" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:06.595897 systemd[1]: Started cri-containerd-d773c1c751a977f9298f61e708bdeadf6cfb63fdfd7a6e3b4e30fe41160b498d.scope - libcontainer container d773c1c751a977f9298f61e708bdeadf6cfb63fdfd7a6e3b4e30fe41160b498d. May 14 04:56:06.598812 systemd[1]: Started cri-containerd-362e4a03cc436295e0ac31f4ed511f88d76038eca3511fb88edfc6ab1fb91918.scope - libcontainer container 362e4a03cc436295e0ac31f4ed511f88d76038eca3511fb88edfc6ab1fb91918. May 14 04:56:06.603156 systemd[1]: Started cri-containerd-320b9ea1c4acfb68c8049b13ab901e084b6be33cdb05a51dc683bdcdbc507922.scope - libcontainer container 320b9ea1c4acfb68c8049b13ab901e084b6be33cdb05a51dc683bdcdbc507922. May 14 04:56:06.636792 containerd[1523]: time="2025-05-14T04:56:06.636740074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"362e4a03cc436295e0ac31f4ed511f88d76038eca3511fb88edfc6ab1fb91918\"" May 14 04:56:06.642186 containerd[1523]: time="2025-05-14T04:56:06.642139114Z" level=info msg="CreateContainer within sandbox \"362e4a03cc436295e0ac31f4ed511f88d76038eca3511fb88edfc6ab1fb91918\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 04:56:06.648341 containerd[1523]: time="2025-05-14T04:56:06.648304914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"320b9ea1c4acfb68c8049b13ab901e084b6be33cdb05a51dc683bdcdbc507922\"" May 14 04:56:06.651668 containerd[1523]: time="2025-05-14T04:56:06.651635114Z" level=info msg="CreateContainer within sandbox \"320b9ea1c4acfb68c8049b13ab901e084b6be33cdb05a51dc683bdcdbc507922\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 04:56:06.651920 containerd[1523]: time="2025-05-14T04:56:06.651887834Z" level=info msg="Container 40580082316d14ed645aebd221f9ba9629f2ebfc697de11f8a30a4ff810b66dc: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:06.658487 containerd[1523]: time="2025-05-14T04:56:06.658456754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1822f1523f0800e0cbfb570319c3f31e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d773c1c751a977f9298f61e708bdeadf6cfb63fdfd7a6e3b4e30fe41160b498d\"" May 14 04:56:06.661822 containerd[1523]: time="2025-05-14T04:56:06.661782394Z" level=info msg="CreateContainer within sandbox \"362e4a03cc436295e0ac31f4ed511f88d76038eca3511fb88edfc6ab1fb91918\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"40580082316d14ed645aebd221f9ba9629f2ebfc697de11f8a30a4ff810b66dc\"" May 14 04:56:06.662330 containerd[1523]: time="2025-05-14T04:56:06.662308674Z" level=info msg="StartContainer for \"40580082316d14ed645aebd221f9ba9629f2ebfc697de11f8a30a4ff810b66dc\"" May 14 04:56:06.662382 containerd[1523]: time="2025-05-14T04:56:06.662342194Z" level=info msg="CreateContainer within sandbox \"d773c1c751a977f9298f61e708bdeadf6cfb63fdfd7a6e3b4e30fe41160b498d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 04:56:06.663327 containerd[1523]: time="2025-05-14T04:56:06.663300514Z" level=info msg="Container 92290fb1f6f0ff401494e01073d780d37a6e9361cce1ea8d49f16911d2b24031: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:06.663408 containerd[1523]: time="2025-05-14T04:56:06.663366914Z" level=info msg="connecting to shim 40580082316d14ed645aebd221f9ba9629f2ebfc697de11f8a30a4ff810b66dc" address="unix:///run/containerd/s/f00923f06840ee6a5ad36dbdc794228e3add2d431124783aeeac5bb5c240f058" protocol=ttrpc version=3 May 14 04:56:06.669569 containerd[1523]: time="2025-05-14T04:56:06.669522754Z" level=info 
msg="CreateContainer within sandbox \"320b9ea1c4acfb68c8049b13ab901e084b6be33cdb05a51dc683bdcdbc507922\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"92290fb1f6f0ff401494e01073d780d37a6e9361cce1ea8d49f16911d2b24031\"" May 14 04:56:06.670090 containerd[1523]: time="2025-05-14T04:56:06.670065394Z" level=info msg="StartContainer for \"92290fb1f6f0ff401494e01073d780d37a6e9361cce1ea8d49f16911d2b24031\"" May 14 04:56:06.671218 containerd[1523]: time="2025-05-14T04:56:06.671194314Z" level=info msg="connecting to shim 92290fb1f6f0ff401494e01073d780d37a6e9361cce1ea8d49f16911d2b24031" address="unix:///run/containerd/s/58a2fd989a56fd88416ef2a14510511034c3ab2341259eda4a5f0f397e2a63b8" protocol=ttrpc version=3 May 14 04:56:06.673932 containerd[1523]: time="2025-05-14T04:56:06.673891914Z" level=info msg="Container 3faea64a6cfc6ba57097dde8a3ab0a164fb1511e738a0f67dd9753f79eb081cb: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:06.681865 systemd[1]: Started cri-containerd-40580082316d14ed645aebd221f9ba9629f2ebfc697de11f8a30a4ff810b66dc.scope - libcontainer container 40580082316d14ed645aebd221f9ba9629f2ebfc697de11f8a30a4ff810b66dc. 
May 14 04:56:06.687016 containerd[1523]: time="2025-05-14T04:56:06.686972474Z" level=info msg="CreateContainer within sandbox \"d773c1c751a977f9298f61e708bdeadf6cfb63fdfd7a6e3b4e30fe41160b498d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3faea64a6cfc6ba57097dde8a3ab0a164fb1511e738a0f67dd9753f79eb081cb\"" May 14 04:56:06.687695 containerd[1523]: time="2025-05-14T04:56:06.687667954Z" level=info msg="StartContainer for \"3faea64a6cfc6ba57097dde8a3ab0a164fb1511e738a0f67dd9753f79eb081cb\"" May 14 04:56:06.689656 containerd[1523]: time="2025-05-14T04:56:06.689625434Z" level=info msg="connecting to shim 3faea64a6cfc6ba57097dde8a3ab0a164fb1511e738a0f67dd9753f79eb081cb" address="unix:///run/containerd/s/eb037d9699a7ad94cfdcf32a18b0e27edb385528c4bf9ece1c87a140b6c4c573" protocol=ttrpc version=3 May 14 04:56:06.702884 systemd[1]: Started cri-containerd-92290fb1f6f0ff401494e01073d780d37a6e9361cce1ea8d49f16911d2b24031.scope - libcontainer container 92290fb1f6f0ff401494e01073d780d37a6e9361cce1ea8d49f16911d2b24031. May 14 04:56:06.706257 systemd[1]: Started cri-containerd-3faea64a6cfc6ba57097dde8a3ab0a164fb1511e738a0f67dd9753f79eb081cb.scope - libcontainer container 3faea64a6cfc6ba57097dde8a3ab0a164fb1511e738a0f67dd9753f79eb081cb. 
May 14 04:56:06.730723 kubelet[2409]: E0514 04:56:06.730662 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" May 14 04:56:06.730826 kubelet[2409]: W0514 04:56:06.730759 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.730826 kubelet[2409]: E0514 04:56:06.730814 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused May 14 04:56:06.736903 containerd[1523]: time="2025-05-14T04:56:06.736849674Z" level=info msg="StartContainer for \"40580082316d14ed645aebd221f9ba9629f2ebfc697de11f8a30a4ff810b66dc\" returns successfully" May 14 04:56:06.799655 containerd[1523]: time="2025-05-14T04:56:06.798789434Z" level=info msg="StartContainer for \"3faea64a6cfc6ba57097dde8a3ab0a164fb1511e738a0f67dd9753f79eb081cb\" returns successfully" May 14 04:56:06.800282 containerd[1523]: time="2025-05-14T04:56:06.800247714Z" level=info msg="StartContainer for \"92290fb1f6f0ff401494e01073d780d37a6e9361cce1ea8d49f16911d2b24031\" returns successfully" May 14 04:56:06.832127 kubelet[2409]: I0514 04:56:06.830589 2409 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 04:56:06.832127 kubelet[2409]: E0514 04:56:06.830908 2409 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" May 14 04:56:08.433140 kubelet[2409]: I0514 04:56:08.433098 2409 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 04:56:09.371029 kubelet[2409]: E0514 04:56:09.370690 2409 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 04:56:09.443990 kubelet[2409]: I0514 04:56:09.443955 2409 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 04:56:10.061693 kubelet[2409]: E0514 04:56:10.061228 2409 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 14 04:56:10.315961 kubelet[2409]: I0514 04:56:10.315877 2409 apiserver.go:52] "Watching apiserver" May 14 04:56:10.323676 kubelet[2409]: I0514 04:56:10.323621 2409 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 04:56:11.340126 systemd[1]: Reload requested from client PID 2685 ('systemctl') (unit session-7.scope)... May 14 04:56:11.340141 systemd[1]: Reloading... May 14 04:56:11.406738 zram_generator::config[2731]: No configuration found. May 14 04:56:11.467755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 04:56:11.564610 systemd[1]: Reloading finished in 224 ms. May 14 04:56:11.592547 kubelet[2409]: I0514 04:56:11.592419 2409 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 04:56:11.593368 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:56:11.606615 systemd[1]: kubelet.service: Deactivated successfully. May 14 04:56:11.607777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 04:56:11.607833 systemd[1]: kubelet.service: Consumed 1.419s CPU time, 112.8M memory peak. May 14 04:56:11.609385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 04:56:11.750367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 04:56:11.753515 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 04:56:11.796895 kubelet[2770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 04:56:11.796895 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 04:56:11.796895 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 04:56:11.797249 kubelet[2770]: I0514 04:56:11.796907 2770 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 04:56:11.800747 kubelet[2770]: I0514 04:56:11.800722 2770 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 04:56:11.800747 kubelet[2770]: I0514 04:56:11.800744 2770 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 04:56:11.800952 kubelet[2770]: I0514 04:56:11.800926 2770 server.go:927] "Client rotation is on, will bootstrap in background" May 14 04:56:11.802177 kubelet[2770]: I0514 04:56:11.802155 2770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 14 04:56:11.803323 kubelet[2770]: I0514 04:56:11.803262 2770 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 04:56:11.809666 kubelet[2770]: I0514 04:56:11.809640 2770 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 04:56:11.809876 kubelet[2770]: I0514 04:56:11.809841 2770 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 04:56:11.810030 kubelet[2770]: I0514 04:56:11.809867 2770 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManager
ReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 04:56:11.810101 kubelet[2770]: I0514 04:56:11.810031 2770 topology_manager.go:138] "Creating topology manager with none policy" May 14 04:56:11.810101 kubelet[2770]: I0514 04:56:11.810040 2770 container_manager_linux.go:301] "Creating device plugin manager" May 14 04:56:11.810101 kubelet[2770]: I0514 04:56:11.810072 2770 state_mem.go:36] "Initialized new in-memory state store" May 14 04:56:11.810194 kubelet[2770]: I0514 04:56:11.810181 2770 kubelet.go:400] "Attempting to sync node with API server" May 14 04:56:11.810220 kubelet[2770]: I0514 04:56:11.810196 2770 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 04:56:11.810220 kubelet[2770]: I0514 04:56:11.810218 2770 kubelet.go:312] "Adding apiserver pod source" May 14 04:56:11.810266 kubelet[2770]: I0514 04:56:11.810227 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 04:56:11.812396 kubelet[2770]: I0514 04:56:11.811528 2770 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 04:56:11.812912 kubelet[2770]: I0514 04:56:11.812729 2770 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 04:56:11.813780 kubelet[2770]: I0514 04:56:11.813265 2770 server.go:1264] "Started kubelet" May 14 04:56:11.817715 kubelet[2770]: I0514 04:56:11.814818 2770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 04:56:11.817715 kubelet[2770]: I0514 04:56:11.815968 2770 server.go:455] "Adding debug handlers to kubelet server" May 14 04:56:11.817715 kubelet[2770]: I0514 04:56:11.814824 2770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 04:56:11.817715 kubelet[2770]: I0514 04:56:11.817539 2770 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 04:56:11.819709 kubelet[2770]: I0514 04:56:11.817976 2770 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 04:56:11.823740 kubelet[2770]: I0514 04:56:11.823665 2770 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 04:56:11.824604 kubelet[2770]: I0514 04:56:11.824580 2770 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 04:56:11.824681 kubelet[2770]: E0514 04:56:11.824295 2770 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 04:56:11.827709 kubelet[2770]: I0514 04:56:11.824888 2770 reconciler.go:26] "Reconciler: start to sync state" May 14 04:56:11.832913 kubelet[2770]: I0514 04:56:11.832887 2770 factory.go:221] Registration of the systemd container factory successfully May 14 04:56:11.833005 kubelet[2770]: I0514 04:56:11.832978 2770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 04:56:11.838384 kubelet[2770]: I0514 04:56:11.838242 2770 factory.go:221] Registration of the containerd container factory successfully May 14 04:56:11.840632 kubelet[2770]: I0514 04:56:11.840514 2770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 04:56:11.841334 kubelet[2770]: I0514 04:56:11.841318 2770 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 04:56:11.841428 kubelet[2770]: I0514 04:56:11.841417 2770 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 04:56:11.841496 kubelet[2770]: I0514 04:56:11.841486 2770 kubelet.go:2337] "Starting kubelet main sync loop" May 14 04:56:11.841579 kubelet[2770]: E0514 04:56:11.841564 2770 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 04:56:11.867889 kubelet[2770]: I0514 04:56:11.867818 2770 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 04:56:11.867986 kubelet[2770]: I0514 04:56:11.867973 2770 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 04:56:11.868051 kubelet[2770]: I0514 04:56:11.868042 2770 state_mem.go:36] "Initialized new in-memory state store" May 14 04:56:11.868238 kubelet[2770]: I0514 04:56:11.868221 2770 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 04:56:11.868315 kubelet[2770]: I0514 04:56:11.868291 2770 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 04:56:11.868374 kubelet[2770]: I0514 04:56:11.868365 2770 policy_none.go:49] "None policy: Start" May 14 04:56:11.871146 kubelet[2770]: I0514 04:56:11.871120 2770 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 04:56:11.871146 kubelet[2770]: I0514 04:56:11.871152 2770 state_mem.go:35] "Initializing new in-memory state store" May 14 04:56:11.871330 kubelet[2770]: I0514 04:56:11.871313 2770 state_mem.go:75] "Updated machine memory state" May 14 04:56:11.875597 kubelet[2770]: I0514 04:56:11.875568 2770 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 04:56:11.875774 kubelet[2770]: I0514 04:56:11.875737 2770 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 04:56:11.875852 kubelet[2770]: I0514 04:56:11.875838 2770 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 04:56:11.925940 kubelet[2770]: I0514 04:56:11.925918 2770 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 04:56:11.931996 kubelet[2770]: I0514 04:56:11.931965 2770 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 14 04:56:11.932064 kubelet[2770]: I0514 04:56:11.932038 2770 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 04:56:11.942422 kubelet[2770]: I0514 04:56:11.942390 2770 topology_manager.go:215] "Topology Admit Handler" podUID="1822f1523f0800e0cbfb570319c3f31e" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 04:56:11.942505 kubelet[2770]: I0514 04:56:11.942483 2770 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 04:56:11.942530 kubelet[2770]: I0514 04:56:11.942517 2770 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 04:56:12.028843 kubelet[2770]: I0514 04:56:12.028807 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:12.028843 kubelet[2770]: I0514 04:56:12.028844 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1822f1523f0800e0cbfb570319c3f31e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1822f1523f0800e0cbfb570319c3f31e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:56:12.028969 kubelet[2770]: I0514 04:56:12.028865 2770 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1822f1523f0800e0cbfb570319c3f31e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1822f1523f0800e0cbfb570319c3f31e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:56:12.028969 kubelet[2770]: I0514 04:56:12.028881 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:12.028969 kubelet[2770]: I0514 04:56:12.028897 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:12.028969 kubelet[2770]: I0514 04:56:12.028914 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 04:56:12.028969 kubelet[2770]: I0514 04:56:12.028932 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1822f1523f0800e0cbfb570319c3f31e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1822f1523f0800e0cbfb570319c3f31e\") " pod="kube-system/kube-apiserver-localhost" May 14 04:56:12.029074 kubelet[2770]: I0514 
04:56:12.028947 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:12.029074 kubelet[2770]: I0514 04:56:12.028973 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 04:56:12.811795 kubelet[2770]: I0514 04:56:12.811734 2770 apiserver.go:52] "Watching apiserver" May 14 04:56:12.825855 kubelet[2770]: I0514 04:56:12.825663 2770 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 04:56:12.862896 kubelet[2770]: E0514 04:56:12.862845 2770 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 04:56:12.863439 kubelet[2770]: E0514 04:56:12.862786 2770 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 04:56:12.894730 kubelet[2770]: I0514 04:56:12.891642 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8916230779999998 podStartE2EDuration="1.891623078s" podCreationTimestamp="2025-05-14 04:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:56:12.880486176 +0000 UTC m=+1.124097942" watchObservedRunningTime="2025-05-14 
04:56:12.891623078 +0000 UTC m=+1.135234844" May 14 04:56:12.895437 kubelet[2770]: I0514 04:56:12.895389 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.895374859 podStartE2EDuration="1.895374859s" podCreationTimestamp="2025-05-14 04:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:56:12.891154556 +0000 UTC m=+1.134766322" watchObservedRunningTime="2025-05-14 04:56:12.895374859 +0000 UTC m=+1.138986625" May 14 04:56:16.685281 sudo[1733]: pam_unix(sudo:session): session closed for user root May 14 04:56:16.688350 sshd[1732]: Connection closed by 10.0.0.1 port 34298 May 14 04:56:16.688823 sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 14 04:56:16.692654 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. May 14 04:56:16.693273 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:34298.service: Deactivated successfully. May 14 04:56:16.695672 systemd[1]: session-7.scope: Deactivated successfully. May 14 04:56:16.697772 systemd[1]: session-7.scope: Consumed 6.800s CPU time, 245.2M memory peak. May 14 04:56:16.700265 systemd-logind[1507]: Removed session 7. May 14 04:56:17.737612 kubelet[2770]: I0514 04:56:17.737545 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.737527387 podStartE2EDuration="6.737527387s" podCreationTimestamp="2025-05-14 04:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:56:12.900369847 +0000 UTC m=+1.143981613" watchObservedRunningTime="2025-05-14 04:56:17.737527387 +0000 UTC m=+5.981139153" May 14 04:56:24.021088 update_engine[1514]: I20250514 04:56:24.021027 1514 update_attempter.cc:509] Updating boot flags... 
May 14 04:56:27.796311 kubelet[2770]: I0514 04:56:27.796274 2770 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 04:56:27.801517 containerd[1523]: time="2025-05-14T04:56:27.801427629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 04:56:27.802591 kubelet[2770]: I0514 04:56:27.802115 2770 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 04:56:27.825885 kubelet[2770]: I0514 04:56:27.825049 2770 topology_manager.go:215] "Topology Admit Handler" podUID="23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75" podNamespace="kube-system" podName="kube-proxy-8rbhb" May 14 04:56:27.835235 systemd[1]: Created slice kubepods-besteffort-pod23d3c5e5_3e89_4f0b_8f8f_44ecd0c54c75.slice - libcontainer container kubepods-besteffort-pod23d3c5e5_3e89_4f0b_8f8f_44ecd0c54c75.slice. May 14 04:56:27.843513 kubelet[2770]: I0514 04:56:27.843371 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75-lib-modules\") pod \"kube-proxy-8rbhb\" (UID: \"23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75\") " pod="kube-system/kube-proxy-8rbhb" May 14 04:56:27.843513 kubelet[2770]: I0514 04:56:27.843410 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9c2r\" (UniqueName: \"kubernetes.io/projected/23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75-kube-api-access-j9c2r\") pod \"kube-proxy-8rbhb\" (UID: \"23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75\") " pod="kube-system/kube-proxy-8rbhb" May 14 04:56:27.843513 kubelet[2770]: I0514 04:56:27.843433 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75-xtables-lock\") pod \"kube-proxy-8rbhb\" 
(UID: \"23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75\") " pod="kube-system/kube-proxy-8rbhb" May 14 04:56:27.843513 kubelet[2770]: I0514 04:56:27.843450 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75-kube-proxy\") pod \"kube-proxy-8rbhb\" (UID: \"23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75\") " pod="kube-system/kube-proxy-8rbhb" May 14 04:56:27.884474 kubelet[2770]: I0514 04:56:27.884424 2770 topology_manager.go:215] "Topology Admit Handler" podUID="e10f1e33-842c-4f7a-bd39-340b919f1669" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-cfggl" May 14 04:56:27.893936 systemd[1]: Created slice kubepods-besteffort-pode10f1e33_842c_4f7a_bd39_340b919f1669.slice - libcontainer container kubepods-besteffort-pode10f1e33_842c_4f7a_bd39_340b919f1669.slice. May 14 04:56:27.944572 kubelet[2770]: I0514 04:56:27.944524 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e10f1e33-842c-4f7a-bd39-340b919f1669-var-lib-calico\") pod \"tigera-operator-797db67f8-cfggl\" (UID: \"e10f1e33-842c-4f7a-bd39-340b919f1669\") " pod="tigera-operator/tigera-operator-797db67f8-cfggl" May 14 04:56:27.944727 kubelet[2770]: I0514 04:56:27.944615 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstkz\" (UniqueName: \"kubernetes.io/projected/e10f1e33-842c-4f7a-bd39-340b919f1669-kube-api-access-lstkz\") pod \"tigera-operator-797db67f8-cfggl\" (UID: \"e10f1e33-842c-4f7a-bd39-340b919f1669\") " pod="tigera-operator/tigera-operator-797db67f8-cfggl" May 14 04:56:28.151088 containerd[1523]: time="2025-05-14T04:56:28.150970827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8rbhb,Uid:23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75,Namespace:kube-system,Attempt:0,}" May 14 
04:56:28.169079 containerd[1523]: time="2025-05-14T04:56:28.169031863Z" level=info msg="connecting to shim 1c6245ae56fb3ab3d8a85d62f58e269bf5d23ec2a5b2b9291e84b45d52d178ad" address="unix:///run/containerd/s/2ef33c2793b732974df8d28c11a1b0156be9159c06e3efb38d082bd99645bd05" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:28.191864 systemd[1]: Started cri-containerd-1c6245ae56fb3ab3d8a85d62f58e269bf5d23ec2a5b2b9291e84b45d52d178ad.scope - libcontainer container 1c6245ae56fb3ab3d8a85d62f58e269bf5d23ec2a5b2b9291e84b45d52d178ad. May 14 04:56:28.197193 containerd[1523]: time="2025-05-14T04:56:28.197154839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-cfggl,Uid:e10f1e33-842c-4f7a-bd39-340b919f1669,Namespace:tigera-operator,Attempt:0,}" May 14 04:56:28.217966 containerd[1523]: time="2025-05-14T04:56:28.217931080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8rbhb,Uid:23d3c5e5-3e89-4f0b-8f8f-44ecd0c54c75,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c6245ae56fb3ab3d8a85d62f58e269bf5d23ec2a5b2b9291e84b45d52d178ad\"" May 14 04:56:28.221898 containerd[1523]: time="2025-05-14T04:56:28.221827087Z" level=info msg="connecting to shim 28c8adea2067257da44df381b17575b71e2ca076bbce94bb1f55e1743dbd90e3" address="unix:///run/containerd/s/803df95455ec1823636a2fdba6f2f91c40ec1e7be44e6dad757c3bb34ba36671" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:28.224987 containerd[1523]: time="2025-05-14T04:56:28.224960014Z" level=info msg="CreateContainer within sandbox \"1c6245ae56fb3ab3d8a85d62f58e269bf5d23ec2a5b2b9291e84b45d52d178ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 04:56:28.233891 containerd[1523]: time="2025-05-14T04:56:28.233862991Z" level=info msg="Container f8b0ac56985fa7cc20ea21897e0a4ca1ef51883b053709850f69fcf75bbdc2d8: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:28.241555 containerd[1523]: time="2025-05-14T04:56:28.241463606Z" level=info 
msg="CreateContainer within sandbox \"1c6245ae56fb3ab3d8a85d62f58e269bf5d23ec2a5b2b9291e84b45d52d178ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8b0ac56985fa7cc20ea21897e0a4ca1ef51883b053709850f69fcf75bbdc2d8\"" May 14 04:56:28.244000 containerd[1523]: time="2025-05-14T04:56:28.243961811Z" level=info msg="StartContainer for \"f8b0ac56985fa7cc20ea21897e0a4ca1ef51883b053709850f69fcf75bbdc2d8\"" May 14 04:56:28.245216 containerd[1523]: time="2025-05-14T04:56:28.245189894Z" level=info msg="connecting to shim f8b0ac56985fa7cc20ea21897e0a4ca1ef51883b053709850f69fcf75bbdc2d8" address="unix:///run/containerd/s/2ef33c2793b732974df8d28c11a1b0156be9159c06e3efb38d082bd99645bd05" protocol=ttrpc version=3 May 14 04:56:28.246060 systemd[1]: Started cri-containerd-28c8adea2067257da44df381b17575b71e2ca076bbce94bb1f55e1743dbd90e3.scope - libcontainer container 28c8adea2067257da44df381b17575b71e2ca076bbce94bb1f55e1743dbd90e3. May 14 04:56:28.267835 systemd[1]: Started cri-containerd-f8b0ac56985fa7cc20ea21897e0a4ca1ef51883b053709850f69fcf75bbdc2d8.scope - libcontainer container f8b0ac56985fa7cc20ea21897e0a4ca1ef51883b053709850f69fcf75bbdc2d8. May 14 04:56:28.282825 containerd[1523]: time="2025-05-14T04:56:28.282724008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-cfggl,Uid:e10f1e33-842c-4f7a-bd39-340b919f1669,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"28c8adea2067257da44df381b17575b71e2ca076bbce94bb1f55e1743dbd90e3\"" May 14 04:56:28.285834 containerd[1523]: time="2025-05-14T04:56:28.285801894Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 04:56:28.309091 containerd[1523]: time="2025-05-14T04:56:28.309034260Z" level=info msg="StartContainer for \"f8b0ac56985fa7cc20ea21897e0a4ca1ef51883b053709850f69fcf75bbdc2d8\" returns successfully" May 14 04:56:29.381668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064409131.mount: Deactivated successfully. 
May 14 04:56:31.464853 containerd[1523]: time="2025-05-14T04:56:31.464804823Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:31.465295 containerd[1523]: time="2025-05-14T04:56:31.465259103Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 04:56:31.466217 containerd[1523]: time="2025-05-14T04:56:31.466184825Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:31.468693 containerd[1523]: time="2025-05-14T04:56:31.468658469Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:31.469926 containerd[1523]: time="2025-05-14T04:56:31.469890711Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 3.184040937s" May 14 04:56:31.469963 containerd[1523]: time="2025-05-14T04:56:31.469928071Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 04:56:31.483648 containerd[1523]: time="2025-05-14T04:56:31.483593693Z" level=info msg="CreateContainer within sandbox \"28c8adea2067257da44df381b17575b71e2ca076bbce94bb1f55e1743dbd90e3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 04:56:31.516010 containerd[1523]: time="2025-05-14T04:56:31.515961066Z" level=info msg="Container 
1109c625c64ce9e1dacf7078c0bd1f0346678c53c82b7747ead768f9d5b5ba5a: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:31.522114 containerd[1523]: time="2025-05-14T04:56:31.521990996Z" level=info msg="CreateContainer within sandbox \"28c8adea2067257da44df381b17575b71e2ca076bbce94bb1f55e1743dbd90e3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1109c625c64ce9e1dacf7078c0bd1f0346678c53c82b7747ead768f9d5b5ba5a\"" May 14 04:56:31.522745 containerd[1523]: time="2025-05-14T04:56:31.522485797Z" level=info msg="StartContainer for \"1109c625c64ce9e1dacf7078c0bd1f0346678c53c82b7747ead768f9d5b5ba5a\"" May 14 04:56:31.523517 containerd[1523]: time="2025-05-14T04:56:31.523463238Z" level=info msg="connecting to shim 1109c625c64ce9e1dacf7078c0bd1f0346678c53c82b7747ead768f9d5b5ba5a" address="unix:///run/containerd/s/803df95455ec1823636a2fdba6f2f91c40ec1e7be44e6dad757c3bb34ba36671" protocol=ttrpc version=3 May 14 04:56:31.570879 systemd[1]: Started cri-containerd-1109c625c64ce9e1dacf7078c0bd1f0346678c53c82b7747ead768f9d5b5ba5a.scope - libcontainer container 1109c625c64ce9e1dacf7078c0bd1f0346678c53c82b7747ead768f9d5b5ba5a. 
May 14 04:56:31.600618 containerd[1523]: time="2025-05-14T04:56:31.600583124Z" level=info msg="StartContainer for \"1109c625c64ce9e1dacf7078c0bd1f0346678c53c82b7747ead768f9d5b5ba5a\" returns successfully" May 14 04:56:31.858665 kubelet[2770]: I0514 04:56:31.858596 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8rbhb" podStartSLOduration=4.858578745 podStartE2EDuration="4.858578745s" podCreationTimestamp="2025-05-14 04:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:56:28.893282137 +0000 UTC m=+17.136893943" watchObservedRunningTime="2025-05-14 04:56:31.858578745 +0000 UTC m=+20.102190511" May 14 04:56:35.697121 kubelet[2770]: I0514 04:56:35.697063 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-cfggl" podStartSLOduration=5.502811647 podStartE2EDuration="8.697039601s" podCreationTimestamp="2025-05-14 04:56:27 +0000 UTC" firstStartedPulling="2025-05-14 04:56:28.28379913 +0000 UTC m=+16.527410896" lastFinishedPulling="2025-05-14 04:56:31.478027084 +0000 UTC m=+19.721638850" observedRunningTime="2025-05-14 04:56:31.899240411 +0000 UTC m=+20.142852177" watchObservedRunningTime="2025-05-14 04:56:35.697039601 +0000 UTC m=+23.940651367" May 14 04:56:35.697475 kubelet[2770]: I0514 04:56:35.697298 2770 topology_manager.go:215] "Topology Admit Handler" podUID="ebd60333-709b-4cab-a67d-580ce046894e" podNamespace="calico-system" podName="calico-typha-7c987d98fd-b9xhc" May 14 04:56:35.708741 systemd[1]: Created slice kubepods-besteffort-podebd60333_709b_4cab_a67d_580ce046894e.slice - libcontainer container kubepods-besteffort-podebd60333_709b_4cab_a67d_580ce046894e.slice. 
May 14 04:56:35.799916 kubelet[2770]: I0514 04:56:35.799883 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd60333-709b-4cab-a67d-580ce046894e-tigera-ca-bundle\") pod \"calico-typha-7c987d98fd-b9xhc\" (UID: \"ebd60333-709b-4cab-a67d-580ce046894e\") " pod="calico-system/calico-typha-7c987d98fd-b9xhc" May 14 04:56:35.800191 kubelet[2770]: I0514 04:56:35.799967 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ebd60333-709b-4cab-a67d-580ce046894e-typha-certs\") pod \"calico-typha-7c987d98fd-b9xhc\" (UID: \"ebd60333-709b-4cab-a67d-580ce046894e\") " pod="calico-system/calico-typha-7c987d98fd-b9xhc" May 14 04:56:35.800236 kubelet[2770]: I0514 04:56:35.800216 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khp55\" (UniqueName: \"kubernetes.io/projected/ebd60333-709b-4cab-a67d-580ce046894e-kube-api-access-khp55\") pod \"calico-typha-7c987d98fd-b9xhc\" (UID: \"ebd60333-709b-4cab-a67d-580ce046894e\") " pod="calico-system/calico-typha-7c987d98fd-b9xhc" May 14 04:56:35.871461 kubelet[2770]: I0514 04:56:35.871415 2770 topology_manager.go:215] "Topology Admit Handler" podUID="e373f05e-e335-49f5-893a-22d5d75bcd40" podNamespace="calico-system" podName="calico-node-4th9d" May 14 04:56:35.879756 systemd[1]: Created slice kubepods-besteffort-pode373f05e_e335_49f5_893a_22d5d75bcd40.slice - libcontainer container kubepods-besteffort-pode373f05e_e335_49f5_893a_22d5d75bcd40.slice. 
May 14 04:56:35.900843 kubelet[2770]: I0514 04:56:35.900807 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-cni-bin-dir\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901237 kubelet[2770]: I0514 04:56:35.900874 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-lib-modules\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901237 kubelet[2770]: I0514 04:56:35.900906 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-xtables-lock\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901237 kubelet[2770]: I0514 04:56:35.900932 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-policysync\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901237 kubelet[2770]: I0514 04:56:35.900948 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-cni-net-dir\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901237 kubelet[2770]: I0514 04:56:35.900966 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4gvl\" (UniqueName: \"kubernetes.io/projected/e373f05e-e335-49f5-893a-22d5d75bcd40-kube-api-access-r4gvl\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901374 kubelet[2770]: I0514 04:56:35.901080 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e373f05e-e335-49f5-893a-22d5d75bcd40-tigera-ca-bundle\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901374 kubelet[2770]: I0514 04:56:35.901098 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e373f05e-e335-49f5-893a-22d5d75bcd40-node-certs\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901374 kubelet[2770]: I0514 04:56:35.901114 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-cni-log-dir\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901374 kubelet[2770]: I0514 04:56:35.901129 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-var-run-calico\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901374 kubelet[2770]: I0514 04:56:35.901146 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-var-lib-calico\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:35.901482 kubelet[2770]: I0514 04:56:35.901161 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e373f05e-e335-49f5-893a-22d5d75bcd40-flexvol-driver-host\") pod \"calico-node-4th9d\" (UID: \"e373f05e-e335-49f5-893a-22d5d75bcd40\") " pod="calico-system/calico-node-4th9d"
May 14 04:56:36.005726 kubelet[2770]: E0514 04:56:36.004610 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.005884 kubelet[2770]: W0514 04:56:36.005819 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.005884 kubelet[2770]: E0514 04:56:36.005854 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.014223 kubelet[2770]: E0514 04:56:36.014176 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.014223 kubelet[2770]: W0514 04:56:36.014191 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.014223 kubelet[2770]: E0514 04:56:36.014204 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.015806 containerd[1523]: time="2025-05-14T04:56:36.015772962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c987d98fd-b9xhc,Uid:ebd60333-709b-4cab-a67d-580ce046894e,Namespace:calico-system,Attempt:0,}"
May 14 04:56:36.049456 containerd[1523]: time="2025-05-14T04:56:36.048910961Z" level=info msg="connecting to shim 990c72c367fc664df3ec48b1804677eb29ae6e008b40c5061b9dc23caa37e84c" address="unix:///run/containerd/s/0e43657168a9f8ea732f3bc97e375c3dae50c09683c5e080c13ec523a2fa9098" namespace=k8s.io protocol=ttrpc version=3
May 14 04:56:36.077493 kubelet[2770]: I0514 04:56:36.076591 2770 topology_manager.go:215] "Topology Admit Handler" podUID="31698ef2-ce30-4522-8f97-9ef88e5b07a0" podNamespace="calico-system" podName="csi-node-driver-n7f8z"
May 14 04:56:36.079694 kubelet[2770]: E0514 04:56:36.079662 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n7f8z" podUID="31698ef2-ce30-4522-8f97-9ef88e5b07a0"
May 14 04:56:36.079872 systemd[1]: Started cri-containerd-990c72c367fc664df3ec48b1804677eb29ae6e008b40c5061b9dc23caa37e84c.scope - libcontainer container 990c72c367fc664df3ec48b1804677eb29ae6e008b40c5061b9dc23caa37e84c.
May 14 04:56:36.095299 kubelet[2770]: E0514 04:56:36.095278 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.095299 kubelet[2770]: W0514 04:56:36.095295 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.095420 kubelet[2770]: E0514 04:56:36.095317 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.095477 kubelet[2770]: E0514 04:56:36.095462 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.095477 kubelet[2770]: W0514 04:56:36.095473 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.095554 kubelet[2770]: E0514 04:56:36.095482 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.095612 kubelet[2770]: E0514 04:56:36.095600 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.095612 kubelet[2770]: W0514 04:56:36.095611 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.095680 kubelet[2770]: E0514 04:56:36.095626 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.095761 kubelet[2770]: E0514 04:56:36.095749 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.095761 kubelet[2770]: W0514 04:56:36.095759 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.095834 kubelet[2770]: E0514 04:56:36.095767 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.095908 kubelet[2770]: E0514 04:56:36.095896 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.095936 kubelet[2770]: W0514 04:56:36.095908 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.095936 kubelet[2770]: E0514 04:56:36.095926 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.096044 kubelet[2770]: E0514 04:56:36.096032 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.096044 kubelet[2770]: W0514 04:56:36.096042 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.096114 kubelet[2770]: E0514 04:56:36.096049 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.096168 kubelet[2770]: E0514 04:56:36.096157 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.096168 kubelet[2770]: W0514 04:56:36.096167 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.096212 kubelet[2770]: E0514 04:56:36.096174 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.096319 kubelet[2770]: E0514 04:56:36.096300 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.096319 kubelet[2770]: W0514 04:56:36.096317 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.096389 kubelet[2770]: E0514 04:56:36.096327 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.096464 kubelet[2770]: E0514 04:56:36.096453 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.096492 kubelet[2770]: W0514 04:56:36.096464 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.096492 kubelet[2770]: E0514 04:56:36.096473 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.096610 kubelet[2770]: E0514 04:56:36.096599 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.096610 kubelet[2770]: W0514 04:56:36.096609 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.096769 kubelet[2770]: E0514 04:56:36.096617 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.096896 kubelet[2770]: E0514 04:56:36.096862 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.096896 kubelet[2770]: W0514 04:56:36.096872 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.096896 kubelet[2770]: E0514 04:56:36.096887 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.097095 kubelet[2770]: E0514 04:56:36.097082 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.097095 kubelet[2770]: W0514 04:56:36.097092 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.097143 kubelet[2770]: E0514 04:56:36.097105 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.097260 kubelet[2770]: E0514 04:56:36.097239 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.097260 kubelet[2770]: W0514 04:56:36.097257 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.097367 kubelet[2770]: E0514 04:56:36.097265 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.097403 kubelet[2770]: E0514 04:56:36.097390 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.097426 kubelet[2770]: W0514 04:56:36.097409 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.097426 kubelet[2770]: E0514 04:56:36.097418 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.097529 kubelet[2770]: E0514 04:56:36.097520 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.097553 kubelet[2770]: W0514 04:56:36.097529 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.097553 kubelet[2770]: E0514 04:56:36.097536 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.097687 kubelet[2770]: E0514 04:56:36.097675 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.097687 kubelet[2770]: W0514 04:56:36.097686 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.097788 kubelet[2770]: E0514 04:56:36.097694 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.097927 kubelet[2770]: E0514 04:56:36.097909 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.097972 kubelet[2770]: W0514 04:56:36.097931 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.097972 kubelet[2770]: E0514 04:56:36.097941 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.098073 kubelet[2770]: E0514 04:56:36.098061 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.098101 kubelet[2770]: W0514 04:56:36.098072 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.098101 kubelet[2770]: E0514 04:56:36.098086 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.098222 kubelet[2770]: E0514 04:56:36.098211 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.098222 kubelet[2770]: W0514 04:56:36.098221 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.098268 kubelet[2770]: E0514 04:56:36.098228 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.098361 kubelet[2770]: E0514 04:56:36.098344 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.098361 kubelet[2770]: W0514 04:56:36.098354 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.098361 kubelet[2770]: E0514 04:56:36.098361 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.103151 kubelet[2770]: E0514 04:56:36.102739 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.103151 kubelet[2770]: W0514 04:56:36.103144 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.103245 kubelet[2770]: E0514 04:56:36.103161 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.103245 kubelet[2770]: I0514 04:56:36.103188 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31698ef2-ce30-4522-8f97-9ef88e5b07a0-varrun\") pod \"csi-node-driver-n7f8z\" (UID: \"31698ef2-ce30-4522-8f97-9ef88e5b07a0\") " pod="calico-system/csi-node-driver-n7f8z"
May 14 04:56:36.103502 kubelet[2770]: E0514 04:56:36.103484 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.103668 kubelet[2770]: W0514 04:56:36.103641 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.103802 kubelet[2770]: E0514 04:56:36.103752 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.104140 kubelet[2770]: I0514 04:56:36.104119 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkmb\" (UniqueName: \"kubernetes.io/projected/31698ef2-ce30-4522-8f97-9ef88e5b07a0-kube-api-access-ldkmb\") pod \"csi-node-driver-n7f8z\" (UID: \"31698ef2-ce30-4522-8f97-9ef88e5b07a0\") " pod="calico-system/csi-node-driver-n7f8z"
May 14 04:56:36.104260 kubelet[2770]: E0514 04:56:36.104245 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.104260 kubelet[2770]: W0514 04:56:36.104258 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.104326 kubelet[2770]: E0514 04:56:36.104274 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.104536 kubelet[2770]: E0514 04:56:36.104520 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.104536 kubelet[2770]: W0514 04:56:36.104534 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.104600 kubelet[2770]: E0514 04:56:36.104546 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.104874 kubelet[2770]: E0514 04:56:36.104860 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.104874 kubelet[2770]: W0514 04:56:36.104873 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.105068 kubelet[2770]: E0514 04:56:36.105039 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.105108 kubelet[2770]: I0514 04:56:36.105074 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31698ef2-ce30-4522-8f97-9ef88e5b07a0-kubelet-dir\") pod \"csi-node-driver-n7f8z\" (UID: \"31698ef2-ce30-4522-8f97-9ef88e5b07a0\") " pod="calico-system/csi-node-driver-n7f8z"
May 14 04:56:36.105415 kubelet[2770]: E0514 04:56:36.105383 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.105415 kubelet[2770]: W0514 04:56:36.105398 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.105507 kubelet[2770]: E0514 04:56:36.105494 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.105765 kubelet[2770]: E0514 04:56:36.105689 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.105765 kubelet[2770]: W0514 04:56:36.105767 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.105896 kubelet[2770]: E0514 04:56:36.105785 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.105896 kubelet[2770]: I0514 04:56:36.105804 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31698ef2-ce30-4522-8f97-9ef88e5b07a0-socket-dir\") pod \"csi-node-driver-n7f8z\" (UID: \"31698ef2-ce30-4522-8f97-9ef88e5b07a0\") " pod="calico-system/csi-node-driver-n7f8z"
May 14 04:56:36.105970 kubelet[2770]: E0514 04:56:36.105956 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.105970 kubelet[2770]: W0514 04:56:36.105969 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.106037 kubelet[2770]: E0514 04:56:36.105983 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.106343 kubelet[2770]: E0514 04:56:36.106288 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.106343 kubelet[2770]: W0514 04:56:36.106316 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.106343 kubelet[2770]: E0514 04:56:36.106330 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.106759 kubelet[2770]: E0514 04:56:36.106655 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.106759 kubelet[2770]: W0514 04:56:36.106672 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.106759 kubelet[2770]: E0514 04:56:36.106688 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.106759 kubelet[2770]: I0514 04:56:36.106734 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31698ef2-ce30-4522-8f97-9ef88e5b07a0-registration-dir\") pod \"csi-node-driver-n7f8z\" (UID: \"31698ef2-ce30-4522-8f97-9ef88e5b07a0\") " pod="calico-system/csi-node-driver-n7f8z"
May 14 04:56:36.106995 kubelet[2770]: E0514 04:56:36.106972 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.106995 kubelet[2770]: W0514 04:56:36.106993 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.107304 kubelet[2770]: E0514 04:56:36.107011 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.107304 kubelet[2770]: E0514 04:56:36.107186 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.107304 kubelet[2770]: W0514 04:56:36.107198 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.107304 kubelet[2770]: E0514 04:56:36.107210 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.107440 kubelet[2770]: E0514 04:56:36.107418 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.107440 kubelet[2770]: W0514 04:56:36.107436 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.107483 kubelet[2770]: E0514 04:56:36.107454 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.107598 kubelet[2770]: E0514 04:56:36.107587 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.107623 kubelet[2770]: W0514 04:56:36.107598 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.107623 kubelet[2770]: E0514 04:56:36.107608 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.108057 kubelet[2770]: E0514 04:56:36.108042 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.108057 kubelet[2770]: W0514 04:56:36.108056 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.108108 kubelet[2770]: E0514 04:56:36.108067 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.116485 containerd[1523]: time="2025-05-14T04:56:36.116446401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c987d98fd-b9xhc,Uid:ebd60333-709b-4cab-a67d-580ce046894e,Namespace:calico-system,Attempt:0,} returns sandbox id \"990c72c367fc664df3ec48b1804677eb29ae6e008b40c5061b9dc23caa37e84c\""
May 14 04:56:36.117762 containerd[1523]: time="2025-05-14T04:56:36.117680802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 14 04:56:36.183135 containerd[1523]: time="2025-05-14T04:56:36.183102240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4th9d,Uid:e373f05e-e335-49f5-893a-22d5d75bcd40,Namespace:calico-system,Attempt:0,}"
May 14 04:56:36.199863 containerd[1523]: time="2025-05-14T04:56:36.199763739Z" level=info msg="connecting to shim de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6" address="unix:///run/containerd/s/addf587c45cee8358a3c4b993ef5206de5e78fc342a449e4c3ac67e25c678f99" namespace=k8s.io protocol=ttrpc version=3
May 14 04:56:36.207773 kubelet[2770]: E0514 04:56:36.207749 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.207773 kubelet[2770]: W0514 04:56:36.207768 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.207888 kubelet[2770]: E0514 04:56:36.207787 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.208116 kubelet[2770]: E0514 04:56:36.208010 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.208116 kubelet[2770]: W0514 04:56:36.208024 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.208116 kubelet[2770]: E0514 04:56:36.208041 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 04:56:36.209347 kubelet[2770]: E0514 04:56:36.208220 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 04:56:36.209347 kubelet[2770]: W0514 04:56:36.208229 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 04:56:36.209347 kubelet[2770]: E0514 04:56:36.208240 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 14 04:56:36.209347 kubelet[2770]: E0514 04:56:36.208379 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.209347 kubelet[2770]: W0514 04:56:36.208389 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.209347 kubelet[2770]: E0514 04:56:36.208397 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.209347 kubelet[2770]: E0514 04:56:36.208520 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.209347 kubelet[2770]: W0514 04:56:36.208527 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.209347 kubelet[2770]: E0514 04:56:36.208535 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.209347 kubelet[2770]: E0514 04:56:36.208684 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.209566 kubelet[2770]: W0514 04:56:36.208691 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.209566 kubelet[2770]: E0514 04:56:36.208719 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.209566 kubelet[2770]: E0514 04:56:36.208837 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.209566 kubelet[2770]: W0514 04:56:36.208844 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.209566 kubelet[2770]: E0514 04:56:36.208852 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.209566 kubelet[2770]: E0514 04:56:36.209066 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.209566 kubelet[2770]: W0514 04:56:36.209074 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.209566 kubelet[2770]: E0514 04:56:36.209122 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.209566 kubelet[2770]: E0514 04:56:36.209177 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.209566 kubelet[2770]: W0514 04:56:36.209183 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.211775 kubelet[2770]: E0514 04:56:36.209228 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.211775 kubelet[2770]: E0514 04:56:36.209353 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.211775 kubelet[2770]: W0514 04:56:36.209361 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.211775 kubelet[2770]: E0514 04:56:36.209399 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.211775 kubelet[2770]: E0514 04:56:36.209489 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.211775 kubelet[2770]: W0514 04:56:36.209496 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.211775 kubelet[2770]: E0514 04:56:36.209555 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.211775 kubelet[2770]: E0514 04:56:36.209607 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.211775 kubelet[2770]: W0514 04:56:36.209614 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.211775 kubelet[2770]: E0514 04:56:36.209623 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.211962 kubelet[2770]: E0514 04:56:36.209793 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.211962 kubelet[2770]: W0514 04:56:36.209802 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.211962 kubelet[2770]: E0514 04:56:36.209813 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.211962 kubelet[2770]: E0514 04:56:36.209938 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.211962 kubelet[2770]: W0514 04:56:36.209946 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.211962 kubelet[2770]: E0514 04:56:36.209959 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.211962 kubelet[2770]: E0514 04:56:36.210936 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.211962 kubelet[2770]: W0514 04:56:36.210953 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.211962 kubelet[2770]: E0514 04:56:36.210967 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.211962 kubelet[2770]: E0514 04:56:36.211350 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.212129 kubelet[2770]: W0514 04:56:36.211360 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.212129 kubelet[2770]: E0514 04:56:36.211371 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.212129 kubelet[2770]: E0514 04:56:36.211499 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.212129 kubelet[2770]: W0514 04:56:36.211508 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.212129 kubelet[2770]: E0514 04:56:36.211545 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.212627 kubelet[2770]: E0514 04:56:36.211681 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.212627 kubelet[2770]: W0514 04:56:36.212583 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.212770 kubelet[2770]: E0514 04:56:36.212697 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.213086 kubelet[2770]: E0514 04:56:36.213069 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.213086 kubelet[2770]: W0514 04:56:36.213083 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.213231 kubelet[2770]: E0514 04:56:36.213156 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.213331 kubelet[2770]: E0514 04:56:36.213274 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.213331 kubelet[2770]: W0514 04:56:36.213284 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.213331 kubelet[2770]: E0514 04:56:36.213306 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.213531 kubelet[2770]: E0514 04:56:36.213510 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.213531 kubelet[2770]: W0514 04:56:36.213524 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.213597 kubelet[2770]: E0514 04:56:36.213563 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.213765 kubelet[2770]: E0514 04:56:36.213748 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.213765 kubelet[2770]: W0514 04:56:36.213762 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.213815 kubelet[2770]: E0514 04:56:36.213778 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.214066 kubelet[2770]: E0514 04:56:36.214051 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.214066 kubelet[2770]: W0514 04:56:36.214064 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.214130 kubelet[2770]: E0514 04:56:36.214074 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.214378 kubelet[2770]: E0514 04:56:36.214352 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.214378 kubelet[2770]: W0514 04:56:36.214368 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.214520 kubelet[2770]: E0514 04:56:36.214385 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.214759 kubelet[2770]: E0514 04:56:36.214738 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.214759 kubelet[2770]: W0514 04:56:36.214755 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.214826 kubelet[2770]: E0514 04:56:36.214766 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:36.222003 kubelet[2770]: E0514 04:56:36.221973 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:36.222003 kubelet[2770]: W0514 04:56:36.222031 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:36.222003 kubelet[2770]: E0514 04:56:36.222046 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:36.235911 systemd[1]: Started cri-containerd-de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6.scope - libcontainer container de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6. May 14 04:56:36.259699 containerd[1523]: time="2025-05-14T04:56:36.257801008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4th9d,Uid:e373f05e-e335-49f5-893a-22d5d75bcd40,Namespace:calico-system,Attempt:0,} returns sandbox id \"de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6\"" May 14 04:56:37.842602 kubelet[2770]: E0514 04:56:37.842514 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n7f8z" podUID="31698ef2-ce30-4522-8f97-9ef88e5b07a0" May 14 04:56:38.445630 containerd[1523]: time="2025-05-14T04:56:38.445588055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:38.446777 containerd[1523]: time="2025-05-14T04:56:38.446695816Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 04:56:38.447836 containerd[1523]: time="2025-05-14T04:56:38.447799697Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:38.449803 containerd[1523]: time="2025-05-14T04:56:38.449755579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:38.450387 containerd[1523]: time="2025-05-14T04:56:38.450354420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.332513658s" May 14 04:56:38.450387 containerd[1523]: time="2025-05-14T04:56:38.450385460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 04:56:38.451555 containerd[1523]: time="2025-05-14T04:56:38.451526981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 04:56:38.469030 containerd[1523]: time="2025-05-14T04:56:38.468508319Z" level=info msg="CreateContainer within sandbox \"990c72c367fc664df3ec48b1804677eb29ae6e008b40c5061b9dc23caa37e84c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 04:56:38.475093 containerd[1523]: time="2025-05-14T04:56:38.475068246Z" level=info msg="Container ecbd729fabf522db9f5ab7902720753c9eed159911f2775047d46abdff9c4998: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:38.481744 containerd[1523]: 
time="2025-05-14T04:56:38.481694092Z" level=info msg="CreateContainer within sandbox \"990c72c367fc664df3ec48b1804677eb29ae6e008b40c5061b9dc23caa37e84c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ecbd729fabf522db9f5ab7902720753c9eed159911f2775047d46abdff9c4998\"" May 14 04:56:38.482439 containerd[1523]: time="2025-05-14T04:56:38.482405813Z" level=info msg="StartContainer for \"ecbd729fabf522db9f5ab7902720753c9eed159911f2775047d46abdff9c4998\"" May 14 04:56:38.483615 containerd[1523]: time="2025-05-14T04:56:38.483589974Z" level=info msg="connecting to shim ecbd729fabf522db9f5ab7902720753c9eed159911f2775047d46abdff9c4998" address="unix:///run/containerd/s/0e43657168a9f8ea732f3bc97e375c3dae50c09683c5e080c13ec523a2fa9098" protocol=ttrpc version=3 May 14 04:56:38.503855 systemd[1]: Started cri-containerd-ecbd729fabf522db9f5ab7902720753c9eed159911f2775047d46abdff9c4998.scope - libcontainer container ecbd729fabf522db9f5ab7902720753c9eed159911f2775047d46abdff9c4998. May 14 04:56:38.537927 containerd[1523]: time="2025-05-14T04:56:38.537888311Z" level=info msg="StartContainer for \"ecbd729fabf522db9f5ab7902720753c9eed159911f2775047d46abdff9c4998\" returns successfully" May 14 04:56:38.920288 kubelet[2770]: E0514 04:56:38.920186 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.920288 kubelet[2770]: W0514 04:56:38.920206 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.920288 kubelet[2770]: E0514 04:56:38.920220 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.921433 kubelet[2770]: E0514 04:56:38.921086 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.921433 kubelet[2770]: W0514 04:56:38.921098 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.921433 kubelet[2770]: E0514 04:56:38.921110 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.921821 kubelet[2770]: E0514 04:56:38.921790 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.921976 kubelet[2770]: W0514 04:56:38.921895 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.922054 kubelet[2770]: E0514 04:56:38.922041 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.922334 kubelet[2770]: E0514 04:56:38.922287 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.922334 kubelet[2770]: W0514 04:56:38.922307 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.922421 kubelet[2770]: E0514 04:56:38.922409 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.922674 kubelet[2770]: E0514 04:56:38.922661 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.922770 kubelet[2770]: W0514 04:56:38.922757 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.922909 kubelet[2770]: E0514 04:56:38.922896 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.923209 kubelet[2770]: E0514 04:56:38.923119 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.923209 kubelet[2770]: W0514 04:56:38.923130 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.923209 kubelet[2770]: E0514 04:56:38.923140 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.923382 kubelet[2770]: E0514 04:56:38.923370 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.923438 kubelet[2770]: W0514 04:56:38.923427 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.923486 kubelet[2770]: E0514 04:56:38.923477 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.923773 kubelet[2770]: E0514 04:56:38.923661 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.923773 kubelet[2770]: W0514 04:56:38.923672 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.923773 kubelet[2770]: E0514 04:56:38.923681 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.923917 kubelet[2770]: E0514 04:56:38.923906 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.923996 kubelet[2770]: W0514 04:56:38.923984 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.924048 kubelet[2770]: E0514 04:56:38.924038 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.924728 kubelet[2770]: E0514 04:56:38.924220 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.924728 kubelet[2770]: W0514 04:56:38.924654 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.924728 kubelet[2770]: E0514 04:56:38.924669 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.925733 kubelet[2770]: E0514 04:56:38.925492 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.926063 kubelet[2770]: W0514 04:56:38.925819 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.926063 kubelet[2770]: E0514 04:56:38.925843 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.926063 kubelet[2770]: I0514 04:56:38.926012 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c987d98fd-b9xhc" podStartSLOduration=1.592335895 podStartE2EDuration="3.925999234s" podCreationTimestamp="2025-05-14 04:56:35 +0000 UTC" firstStartedPulling="2025-05-14 04:56:36.117450962 +0000 UTC m=+24.361062688" lastFinishedPulling="2025-05-14 04:56:38.451114261 +0000 UTC m=+26.694726027" observedRunningTime="2025-05-14 04:56:38.925102873 +0000 UTC m=+27.168714639" watchObservedRunningTime="2025-05-14 04:56:38.925999234 +0000 UTC m=+27.169611000" May 14 04:56:38.927186 kubelet[2770]: E0514 04:56:38.926612 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.927186 kubelet[2770]: W0514 04:56:38.926857 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.927186 kubelet[2770]: E0514 04:56:38.926872 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.927529 kubelet[2770]: E0514 04:56:38.927424 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.927529 kubelet[2770]: W0514 04:56:38.927439 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.927529 kubelet[2770]: E0514 04:56:38.927450 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.927767 kubelet[2770]: E0514 04:56:38.927753 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.927831 kubelet[2770]: W0514 04:56:38.927820 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.927883 kubelet[2770]: E0514 04:56:38.927873 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.928225 kubelet[2770]: E0514 04:56:38.928073 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.928225 kubelet[2770]: W0514 04:56:38.928084 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.928225 kubelet[2770]: E0514 04:56:38.928093 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.928380 kubelet[2770]: E0514 04:56:38.928367 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.928432 kubelet[2770]: W0514 04:56:38.928422 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.928485 kubelet[2770]: E0514 04:56:38.928475 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.928686 kubelet[2770]: E0514 04:56:38.928675 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.928783 kubelet[2770]: W0514 04:56:38.928771 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.928850 kubelet[2770]: E0514 04:56:38.928839 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.929075 kubelet[2770]: E0514 04:56:38.929044 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.929075 kubelet[2770]: W0514 04:56:38.929062 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.929075 kubelet[2770]: E0514 04:56:38.929080 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.929218 kubelet[2770]: E0514 04:56:38.929206 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.929218 kubelet[2770]: W0514 04:56:38.929213 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.929218 kubelet[2770]: E0514 04:56:38.929221 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.929901 kubelet[2770]: E0514 04:56:38.929357 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.929901 kubelet[2770]: W0514 04:56:38.929366 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.929901 kubelet[2770]: E0514 04:56:38.929375 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.929901 kubelet[2770]: E0514 04:56:38.929556 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.929901 kubelet[2770]: W0514 04:56:38.929570 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.929901 kubelet[2770]: E0514 04:56:38.929583 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.930104 kubelet[2770]: E0514 04:56:38.930071 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.930104 kubelet[2770]: W0514 04:56:38.930086 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.930104 kubelet[2770]: E0514 04:56:38.930103 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.930271 kubelet[2770]: E0514 04:56:38.930252 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.930271 kubelet[2770]: W0514 04:56:38.930263 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.930362 kubelet[2770]: E0514 04:56:38.930339 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.930426 kubelet[2770]: E0514 04:56:38.930412 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.930426 kubelet[2770]: W0514 04:56:38.930424 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.930487 kubelet[2770]: E0514 04:56:38.930447 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.930567 kubelet[2770]: E0514 04:56:38.930546 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.930567 kubelet[2770]: W0514 04:56:38.930557 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.930796 kubelet[2770]: E0514 04:56:38.930580 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.930796 kubelet[2770]: E0514 04:56:38.930661 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.930796 kubelet[2770]: W0514 04:56:38.930668 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.930796 kubelet[2770]: E0514 04:56:38.930682 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.931769 kubelet[2770]: E0514 04:56:38.930864 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.931769 kubelet[2770]: W0514 04:56:38.930875 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.931769 kubelet[2770]: E0514 04:56:38.930892 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.931769 kubelet[2770]: E0514 04:56:38.931083 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.931769 kubelet[2770]: W0514 04:56:38.931090 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.931769 kubelet[2770]: E0514 04:56:38.931098 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.931769 kubelet[2770]: E0514 04:56:38.931245 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.931769 kubelet[2770]: W0514 04:56:38.931253 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.931769 kubelet[2770]: E0514 04:56:38.931261 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.931769 kubelet[2770]: E0514 04:56:38.931558 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.932002 kubelet[2770]: W0514 04:56:38.931572 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.932002 kubelet[2770]: E0514 04:56:38.931583 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.932002 kubelet[2770]: E0514 04:56:38.931726 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.932002 kubelet[2770]: W0514 04:56:38.931736 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.932002 kubelet[2770]: E0514 04:56:38.931752 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:38.932002 kubelet[2770]: E0514 04:56:38.931987 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.932002 kubelet[2770]: W0514 04:56:38.932001 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.932243 kubelet[2770]: E0514 04:56:38.932021 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 04:56:38.932243 kubelet[2770]: E0514 04:56:38.932158 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 04:56:38.932243 kubelet[2770]: W0514 04:56:38.932166 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 04:56:38.932243 kubelet[2770]: E0514 04:56:38.932175 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 04:56:39.467064 containerd[1523]: time="2025-05-14T04:56:39.467008045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:39.467403 containerd[1523]: time="2025-05-14T04:56:39.467348806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 04:56:39.468215 containerd[1523]: time="2025-05-14T04:56:39.468176966Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:39.470148 containerd[1523]: time="2025-05-14T04:56:39.470113728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:39.470788 containerd[1523]: time="2025-05-14T04:56:39.470757889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.019196548s" May 14 04:56:39.470816 containerd[1523]: time="2025-05-14T04:56:39.470788609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 04:56:39.473168 containerd[1523]: time="2025-05-14T04:56:39.473138731Z" level=info msg="CreateContainer within sandbox \"de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 04:56:39.491571 containerd[1523]: time="2025-05-14T04:56:39.491531509Z" level=info msg="Container 02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:39.494616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196476124.mount: Deactivated successfully. May 14 04:56:39.500249 containerd[1523]: time="2025-05-14T04:56:39.500209478Z" level=info msg="CreateContainer within sandbox \"de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced\"" May 14 04:56:39.500631 containerd[1523]: time="2025-05-14T04:56:39.500599478Z" level=info msg="StartContainer for \"02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced\"" May 14 04:56:39.502231 containerd[1523]: time="2025-05-14T04:56:39.502190319Z" level=info msg="connecting to shim 02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced" address="unix:///run/containerd/s/addf587c45cee8358a3c4b993ef5206de5e78fc342a449e4c3ac67e25c678f99" protocol=ttrpc version=3 May 14 04:56:39.522854 systemd[1]: Started cri-containerd-02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced.scope - libcontainer container 02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced. May 14 04:56:39.571406 containerd[1523]: time="2025-05-14T04:56:39.571339867Z" level=info msg="StartContainer for \"02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced\" returns successfully" May 14 04:56:39.588327 systemd[1]: cri-containerd-02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced.scope: Deactivated successfully. 
May 14 04:56:39.604654 containerd[1523]: time="2025-05-14T04:56:39.604616939Z" level=info msg="received exit event container_id:\"02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced\" id:\"02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced\" pid:3438 exited_at:{seconds:1747198599 nanos:591491446}" May 14 04:56:39.604810 containerd[1523]: time="2025-05-14T04:56:39.604778939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced\" id:\"02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced\" pid:3438 exited_at:{seconds:1747198599 nanos:591491446}" May 14 04:56:39.638216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02d8d24ccaf4d20acd8f32b83867c43fe4966d63d7f92b8dc98a6a511ef72ced-rootfs.mount: Deactivated successfully. May 14 04:56:39.842436 kubelet[2770]: E0514 04:56:39.842072 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n7f8z" podUID="31698ef2-ce30-4522-8f97-9ef88e5b07a0" May 14 04:56:39.919681 containerd[1523]: time="2025-05-14T04:56:39.919641126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 04:56:39.923768 kubelet[2770]: I0514 04:56:39.921899 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 04:56:41.842627 kubelet[2770]: E0514 04:56:41.842588 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n7f8z" podUID="31698ef2-ce30-4522-8f97-9ef88e5b07a0" May 14 04:56:43.249026 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:46852.service - 
OpenSSH per-connection server daemon (10.0.0.1:46852). May 14 04:56:43.303078 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 46852 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:56:43.304541 sshd-session[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:56:43.309509 systemd-logind[1507]: New session 8 of user core. May 14 04:56:43.319829 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 04:56:43.442232 sshd[3483]: Connection closed by 10.0.0.1 port 46852 May 14 04:56:43.442667 sshd-session[3481]: pam_unix(sshd:session): session closed for user core May 14 04:56:43.446983 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:46852.service: Deactivated successfully. May 14 04:56:43.448978 systemd[1]: session-8.scope: Deactivated successfully. May 14 04:56:43.450583 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. May 14 04:56:43.452241 systemd-logind[1507]: Removed session 8. May 14 04:56:43.843119 kubelet[2770]: E0514 04:56:43.842942 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n7f8z" podUID="31698ef2-ce30-4522-8f97-9ef88e5b07a0" May 14 04:56:44.837730 containerd[1523]: time="2025-05-14T04:56:44.837520837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:44.838166 containerd[1523]: time="2025-05-14T04:56:44.838114797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 04:56:44.838749 containerd[1523]: time="2025-05-14T04:56:44.838726878Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:44.844184 containerd[1523]: time="2025-05-14T04:56:44.844139241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:44.845107 containerd[1523]: time="2025-05-14T04:56:44.845080602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.925401956s" May 14 04:56:44.845300 containerd[1523]: time="2025-05-14T04:56:44.845108442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 04:56:44.847482 containerd[1523]: time="2025-05-14T04:56:44.847451764Z" level=info msg="CreateContainer within sandbox \"de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 04:56:44.855167 containerd[1523]: time="2025-05-14T04:56:44.855059689Z" level=info msg="Container c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:44.861840 containerd[1523]: time="2025-05-14T04:56:44.861803974Z" level=info msg="CreateContainer within sandbox \"de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd\"" May 14 04:56:44.862423 containerd[1523]: time="2025-05-14T04:56:44.862399374Z" level=info msg="StartContainer for 
\"c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd\"" May 14 04:56:44.864844 containerd[1523]: time="2025-05-14T04:56:44.864811776Z" level=info msg="connecting to shim c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd" address="unix:///run/containerd/s/addf587c45cee8358a3c4b993ef5206de5e78fc342a449e4c3ac67e25c678f99" protocol=ttrpc version=3 May 14 04:56:44.884854 systemd[1]: Started cri-containerd-c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd.scope - libcontainer container c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd. May 14 04:56:44.965814 containerd[1523]: time="2025-05-14T04:56:44.965771767Z" level=info msg="StartContainer for \"c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd\" returns successfully" May 14 04:56:45.435976 systemd[1]: cri-containerd-c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd.scope: Deactivated successfully. May 14 04:56:45.437748 systemd[1]: cri-containerd-c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd.scope: Consumed 430ms CPU time, 159.8M memory peak, 48K read from disk, 150.3M written to disk. 
May 14 04:56:45.438977 containerd[1523]: time="2025-05-14T04:56:45.438940681Z" level=info msg="received exit event container_id:\"c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd\" id:\"c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd\" pid:3520 exited_at:{seconds:1747198605 nanos:438756321}" May 14 04:56:45.439152 containerd[1523]: time="2025-05-14T04:56:45.439126682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd\" id:\"c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd\" pid:3520 exited_at:{seconds:1747198605 nanos:438756321}" May 14 04:56:45.455504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c905aa3356e1fe179fd39f730816f9bdb4cb67174c6c17841cf0cfffa366becd-rootfs.mount: Deactivated successfully. May 14 04:56:45.537649 kubelet[2770]: I0514 04:56:45.537573 2770 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 04:56:45.569069 kubelet[2770]: I0514 04:56:45.569028 2770 topology_manager.go:215] "Topology Admit Handler" podUID="809f0cc0-f234-4b5a-b622-52435a22fb76" podNamespace="calico-system" podName="calico-kube-controllers-67b74c5b6f-snz4p" May 14 04:56:45.575663 kubelet[2770]: I0514 04:56:45.575623 2770 topology_manager.go:215] "Topology Admit Handler" podUID="1b538f75-bc5d-494b-b78e-445981161c1a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j7wt8" May 14 04:56:45.576403 kubelet[2770]: I0514 04:56:45.576371 2770 topology_manager.go:215] "Topology Admit Handler" podUID="8a399c2a-61db-455d-987b-a416b563bd32" podNamespace="calico-apiserver" podName="calico-apiserver-78df6769c8-m7r2k" May 14 04:56:45.577210 kubelet[2770]: I0514 04:56:45.577128 2770 topology_manager.go:215] "Topology Admit Handler" podUID="6086ed4c-abb2-47f6-b905-a6bcc217a9e3" podNamespace="calico-apiserver" podName="calico-apiserver-78df6769c8-6hm46" May 14 04:56:45.577290 kubelet[2770]: I0514 
04:56:45.577260 2770 topology_manager.go:215] "Topology Admit Handler" podUID="21401a24-e42e-46ae-b092-d6ef90fb720c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j4ps6" May 14 04:56:45.588896 systemd[1]: Created slice kubepods-besteffort-pod8a399c2a_61db_455d_987b_a416b563bd32.slice - libcontainer container kubepods-besteffort-pod8a399c2a_61db_455d_987b_a416b563bd32.slice. May 14 04:56:45.595880 systemd[1]: Created slice kubepods-besteffort-pod809f0cc0_f234_4b5a_b622_52435a22fb76.slice - libcontainer container kubepods-besteffort-pod809f0cc0_f234_4b5a_b622_52435a22fb76.slice. May 14 04:56:45.603250 systemd[1]: Created slice kubepods-burstable-pod21401a24_e42e_46ae_b092_d6ef90fb720c.slice - libcontainer container kubepods-burstable-pod21401a24_e42e_46ae_b092_d6ef90fb720c.slice. May 14 04:56:45.604160 kubelet[2770]: I0514 04:56:45.604125 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nps2h\" (UniqueName: \"kubernetes.io/projected/1b538f75-bc5d-494b-b78e-445981161c1a-kube-api-access-nps2h\") pod \"coredns-7db6d8ff4d-j7wt8\" (UID: \"1b538f75-bc5d-494b-b78e-445981161c1a\") " pod="kube-system/coredns-7db6d8ff4d-j7wt8" May 14 04:56:45.604218 kubelet[2770]: I0514 04:56:45.604166 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm97k\" (UniqueName: \"kubernetes.io/projected/809f0cc0-f234-4b5a-b622-52435a22fb76-kube-api-access-qm97k\") pod \"calico-kube-controllers-67b74c5b6f-snz4p\" (UID: \"809f0cc0-f234-4b5a-b622-52435a22fb76\") " pod="calico-system/calico-kube-controllers-67b74c5b6f-snz4p" May 14 04:56:45.604218 kubelet[2770]: I0514 04:56:45.604187 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f0cc0-f234-4b5a-b622-52435a22fb76-tigera-ca-bundle\") pod \"calico-kube-controllers-67b74c5b6f-snz4p\" (UID: 
\"809f0cc0-f234-4b5a-b622-52435a22fb76\") " pod="calico-system/calico-kube-controllers-67b74c5b6f-snz4p" May 14 04:56:45.604218 kubelet[2770]: I0514 04:56:45.604202 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21401a24-e42e-46ae-b092-d6ef90fb720c-config-volume\") pod \"coredns-7db6d8ff4d-j4ps6\" (UID: \"21401a24-e42e-46ae-b092-d6ef90fb720c\") " pod="kube-system/coredns-7db6d8ff4d-j4ps6" May 14 04:56:45.604317 kubelet[2770]: I0514 04:56:45.604220 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmml\" (UniqueName: \"kubernetes.io/projected/8a399c2a-61db-455d-987b-a416b563bd32-kube-api-access-mpmml\") pod \"calico-apiserver-78df6769c8-m7r2k\" (UID: \"8a399c2a-61db-455d-987b-a416b563bd32\") " pod="calico-apiserver/calico-apiserver-78df6769c8-m7r2k" May 14 04:56:45.604317 kubelet[2770]: I0514 04:56:45.604238 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a399c2a-61db-455d-987b-a416b563bd32-calico-apiserver-certs\") pod \"calico-apiserver-78df6769c8-m7r2k\" (UID: \"8a399c2a-61db-455d-987b-a416b563bd32\") " pod="calico-apiserver/calico-apiserver-78df6769c8-m7r2k" May 14 04:56:45.604317 kubelet[2770]: I0514 04:56:45.604252 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b538f75-bc5d-494b-b78e-445981161c1a-config-volume\") pod \"coredns-7db6d8ff4d-j7wt8\" (UID: \"1b538f75-bc5d-494b-b78e-445981161c1a\") " pod="kube-system/coredns-7db6d8ff4d-j7wt8" May 14 04:56:45.604317 kubelet[2770]: I0514 04:56:45.604271 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rqpq\" (UniqueName: 
\"kubernetes.io/projected/6086ed4c-abb2-47f6-b905-a6bcc217a9e3-kube-api-access-5rqpq\") pod \"calico-apiserver-78df6769c8-6hm46\" (UID: \"6086ed4c-abb2-47f6-b905-a6bcc217a9e3\") " pod="calico-apiserver/calico-apiserver-78df6769c8-6hm46" May 14 04:56:45.604317 kubelet[2770]: I0514 04:56:45.604298 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjq6\" (UniqueName: \"kubernetes.io/projected/21401a24-e42e-46ae-b092-d6ef90fb720c-kube-api-access-msjq6\") pod \"coredns-7db6d8ff4d-j4ps6\" (UID: \"21401a24-e42e-46ae-b092-d6ef90fb720c\") " pod="kube-system/coredns-7db6d8ff4d-j4ps6" May 14 04:56:45.604420 kubelet[2770]: I0514 04:56:45.604314 2770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6086ed4c-abb2-47f6-b905-a6bcc217a9e3-calico-apiserver-certs\") pod \"calico-apiserver-78df6769c8-6hm46\" (UID: \"6086ed4c-abb2-47f6-b905-a6bcc217a9e3\") " pod="calico-apiserver/calico-apiserver-78df6769c8-6hm46" May 14 04:56:45.606719 systemd[1]: Created slice kubepods-besteffort-pod6086ed4c_abb2_47f6_b905_a6bcc217a9e3.slice - libcontainer container kubepods-besteffort-pod6086ed4c_abb2_47f6_b905_a6bcc217a9e3.slice. May 14 04:56:45.613129 systemd[1]: Created slice kubepods-burstable-pod1b538f75_bc5d_494b_b78e_445981161c1a.slice - libcontainer container kubepods-burstable-pod1b538f75_bc5d_494b_b78e_445981161c1a.slice. May 14 04:56:45.848198 systemd[1]: Created slice kubepods-besteffort-pod31698ef2_ce30_4522_8f97_9ef88e5b07a0.slice - libcontainer container kubepods-besteffort-pod31698ef2_ce30_4522_8f97_9ef88e5b07a0.slice. 
May 14 04:56:45.850661 containerd[1523]: time="2025-05-14T04:56:45.850628793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n7f8z,Uid:31698ef2-ce30-4522-8f97-9ef88e5b07a0,Namespace:calico-system,Attempt:0,}" May 14 04:56:45.895925 containerd[1523]: time="2025-05-14T04:56:45.895876303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-m7r2k,Uid:8a399c2a-61db-455d-987b-a416b563bd32,Namespace:calico-apiserver,Attempt:0,}" May 14 04:56:45.908783 containerd[1523]: time="2025-05-14T04:56:45.906671351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b74c5b6f-snz4p,Uid:809f0cc0-f234-4b5a-b622-52435a22fb76,Namespace:calico-system,Attempt:0,}" May 14 04:56:45.912829 containerd[1523]: time="2025-05-14T04:56:45.912076474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4ps6,Uid:21401a24-e42e-46ae-b092-d6ef90fb720c,Namespace:kube-system,Attempt:0,}" May 14 04:56:45.912829 containerd[1523]: time="2025-05-14T04:56:45.912313074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-6hm46,Uid:6086ed4c-abb2-47f6-b905-a6bcc217a9e3,Namespace:calico-apiserver,Attempt:0,}" May 14 04:56:45.927570 containerd[1523]: time="2025-05-14T04:56:45.926505404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j7wt8,Uid:1b538f75-bc5d-494b-b78e-445981161c1a,Namespace:kube-system,Attempt:0,}" May 14 04:56:45.976290 containerd[1523]: time="2025-05-14T04:56:45.976150716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 04:56:46.226121 containerd[1523]: time="2025-05-14T04:56:46.225995352Z" level=error msg="Failed to destroy network for sandbox \"a2f744bc7853968f2847d54cc5e1af13d6167611907a38ba2b7003dca6b49f2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 14 04:56:46.227035 containerd[1523]: time="2025-05-14T04:56:46.227005713Z" level=error msg="Failed to destroy network for sandbox \"5501a461343bdbf9abbf3cf3bd85366626288e018c627940492872482fda0d84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.228048 containerd[1523]: time="2025-05-14T04:56:46.228009874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n7f8z,Uid:31698ef2-ce30-4522-8f97-9ef88e5b07a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f744bc7853968f2847d54cc5e1af13d6167611907a38ba2b7003dca6b49f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.228769 containerd[1523]: time="2025-05-14T04:56:46.228733714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-m7r2k,Uid:8a399c2a-61db-455d-987b-a416b563bd32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5501a461343bdbf9abbf3cf3bd85366626288e018c627940492872482fda0d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.230401 kubelet[2770]: E0514 04:56:46.230344 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f744bc7853968f2847d54cc5e1af13d6167611907a38ba2b7003dca6b49f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" May 14 04:56:46.230517 kubelet[2770]: E0514 04:56:46.230425 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f744bc7853968f2847d54cc5e1af13d6167611907a38ba2b7003dca6b49f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n7f8z" May 14 04:56:46.230754 kubelet[2770]: E0514 04:56:46.230445 2770 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f744bc7853968f2847d54cc5e1af13d6167611907a38ba2b7003dca6b49f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n7f8z" May 14 04:56:46.232360 kubelet[2770]: E0514 04:56:46.230345 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5501a461343bdbf9abbf3cf3bd85366626288e018c627940492872482fda0d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.232360 kubelet[2770]: E0514 04:56:46.230794 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n7f8z_calico-system(31698ef2-ce30-4522-8f97-9ef88e5b07a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n7f8z_calico-system(31698ef2-ce30-4522-8f97-9ef88e5b07a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2f744bc7853968f2847d54cc5e1af13d6167611907a38ba2b7003dca6b49f2a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n7f8z" podUID="31698ef2-ce30-4522-8f97-9ef88e5b07a0" May 14 04:56:46.232360 kubelet[2770]: E0514 04:56:46.230825 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5501a461343bdbf9abbf3cf3bd85366626288e018c627940492872482fda0d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78df6769c8-m7r2k" May 14 04:56:46.232490 kubelet[2770]: E0514 04:56:46.230846 2770 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5501a461343bdbf9abbf3cf3bd85366626288e018c627940492872482fda0d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78df6769c8-m7r2k" May 14 04:56:46.232490 kubelet[2770]: E0514 04:56:46.230896 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78df6769c8-m7r2k_calico-apiserver(8a399c2a-61db-455d-987b-a416b563bd32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78df6769c8-m7r2k_calico-apiserver(8a399c2a-61db-455d-987b-a416b563bd32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5501a461343bdbf9abbf3cf3bd85366626288e018c627940492872482fda0d84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78df6769c8-m7r2k" podUID="8a399c2a-61db-455d-987b-a416b563bd32" May 14 04:56:46.235620 containerd[1523]: time="2025-05-14T04:56:46.235587318Z" level=error msg="Failed to destroy network for sandbox \"bf8aa5fb579fe51cc7d54c057112967726d6e32e964c2a045a890d5b78e59de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.236603 containerd[1523]: time="2025-05-14T04:56:46.236562359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j7wt8,Uid:1b538f75-bc5d-494b-b78e-445981161c1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8aa5fb579fe51cc7d54c057112967726d6e32e964c2a045a890d5b78e59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.236943 kubelet[2770]: E0514 04:56:46.236882 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8aa5fb579fe51cc7d54c057112967726d6e32e964c2a045a890d5b78e59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.236943 kubelet[2770]: E0514 04:56:46.236931 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8aa5fb579fe51cc7d54c057112967726d6e32e964c2a045a890d5b78e59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-j7wt8" May 14 04:56:46.236943 kubelet[2770]: E0514 04:56:46.236948 2770 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8aa5fb579fe51cc7d54c057112967726d6e32e964c2a045a890d5b78e59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j7wt8" May 14 04:56:46.237141 kubelet[2770]: E0514 04:56:46.236985 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j7wt8_kube-system(1b538f75-bc5d-494b-b78e-445981161c1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j7wt8_kube-system(1b538f75-bc5d-494b-b78e-445981161c1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf8aa5fb579fe51cc7d54c057112967726d6e32e964c2a045a890d5b78e59de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j7wt8" podUID="1b538f75-bc5d-494b-b78e-445981161c1a" May 14 04:56:46.243787 containerd[1523]: time="2025-05-14T04:56:46.243747683Z" level=error msg="Failed to destroy network for sandbox \"8a353c1cdf8089bb4e04397b942236ef16bb76b9e7dd130df4c3732cc10c1b2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.244945 containerd[1523]: time="2025-05-14T04:56:46.244909444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b74c5b6f-snz4p,Uid:809f0cc0-f234-4b5a-b622-52435a22fb76,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a353c1cdf8089bb4e04397b942236ef16bb76b9e7dd130df4c3732cc10c1b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.245071 containerd[1523]: time="2025-05-14T04:56:46.245049644Z" level=error msg="Failed to destroy network for sandbox \"c76660694bc5a9916774e8b28f27b6ec9affdab153906b66e5d4387dac156312\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.245175 kubelet[2770]: E0514 04:56:46.245147 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a353c1cdf8089bb4e04397b942236ef16bb76b9e7dd130df4c3732cc10c1b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.245562 kubelet[2770]: E0514 04:56:46.245262 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a353c1cdf8089bb4e04397b942236ef16bb76b9e7dd130df4c3732cc10c1b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67b74c5b6f-snz4p" May 14 04:56:46.245562 kubelet[2770]: E0514 04:56:46.245565 2770 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a353c1cdf8089bb4e04397b942236ef16bb76b9e7dd130df4c3732cc10c1b2f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67b74c5b6f-snz4p" May 14 04:56:46.245737 kubelet[2770]: E0514 04:56:46.245610 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67b74c5b6f-snz4p_calico-system(809f0cc0-f234-4b5a-b622-52435a22fb76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67b74c5b6f-snz4p_calico-system(809f0cc0-f234-4b5a-b622-52435a22fb76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a353c1cdf8089bb4e04397b942236ef16bb76b9e7dd130df4c3732cc10c1b2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67b74c5b6f-snz4p" podUID="809f0cc0-f234-4b5a-b622-52435a22fb76" May 14 04:56:46.245906 containerd[1523]: time="2025-05-14T04:56:46.245875405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-6hm46,Uid:6086ed4c-abb2-47f6-b905-a6bcc217a9e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76660694bc5a9916774e8b28f27b6ec9affdab153906b66e5d4387dac156312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.246391 kubelet[2770]: E0514 04:56:46.246364 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76660694bc5a9916774e8b28f27b6ec9affdab153906b66e5d4387dac156312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.246471 kubelet[2770]: E0514 04:56:46.246401 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76660694bc5a9916774e8b28f27b6ec9affdab153906b66e5d4387dac156312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78df6769c8-6hm46" May 14 04:56:46.246471 kubelet[2770]: E0514 04:56:46.246420 2770 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c76660694bc5a9916774e8b28f27b6ec9affdab153906b66e5d4387dac156312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78df6769c8-6hm46" May 14 04:56:46.246471 kubelet[2770]: E0514 04:56:46.246451 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78df6769c8-6hm46_calico-apiserver(6086ed4c-abb2-47f6-b905-a6bcc217a9e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78df6769c8-6hm46_calico-apiserver(6086ed4c-abb2-47f6-b905-a6bcc217a9e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c76660694bc5a9916774e8b28f27b6ec9affdab153906b66e5d4387dac156312\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78df6769c8-6hm46" podUID="6086ed4c-abb2-47f6-b905-a6bcc217a9e3" May 14 04:56:46.249352 containerd[1523]: 
time="2025-05-14T04:56:46.249291567Z" level=error msg="Failed to destroy network for sandbox \"c50e082727b0a0e30fd993b1f197280bd9460dc4006de3ce239e6170dda67072\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.253918 containerd[1523]: time="2025-05-14T04:56:46.253868450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4ps6,Uid:21401a24-e42e-46ae-b092-d6ef90fb720c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50e082727b0a0e30fd993b1f197280bd9460dc4006de3ce239e6170dda67072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.254225 kubelet[2770]: E0514 04:56:46.254195 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50e082727b0a0e30fd993b1f197280bd9460dc4006de3ce239e6170dda67072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 04:56:46.254294 kubelet[2770]: E0514 04:56:46.254240 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50e082727b0a0e30fd993b1f197280bd9460dc4006de3ce239e6170dda67072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j4ps6" May 14 04:56:46.254294 kubelet[2770]: E0514 04:56:46.254256 2770 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50e082727b0a0e30fd993b1f197280bd9460dc4006de3ce239e6170dda67072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j4ps6" May 14 04:56:46.254344 kubelet[2770]: E0514 04:56:46.254293 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j4ps6_kube-system(21401a24-e42e-46ae-b092-d6ef90fb720c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j4ps6_kube-system(21401a24-e42e-46ae-b092-d6ef90fb720c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c50e082727b0a0e30fd993b1f197280bd9460dc4006de3ce239e6170dda67072\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j4ps6" podUID="21401a24-e42e-46ae-b092-d6ef90fb720c" May 14 04:56:46.855468 systemd[1]: run-netns-cni\x2d90c73459\x2dde7a\x2df136\x2dc3f4\x2d79f8fde95f13.mount: Deactivated successfully. May 14 04:56:46.855554 systemd[1]: run-netns-cni\x2dbb25921f\x2dc233\x2dc39b\x2d6e52\x2d17bfcd63d6ae.mount: Deactivated successfully. May 14 04:56:46.855600 systemd[1]: run-netns-cni\x2d5f9a2a52\x2d815f\x2d7adb\x2df922\x2d7984efde6351.mount: Deactivated successfully. May 14 04:56:46.855643 systemd[1]: run-netns-cni\x2debf868b3\x2d2680\x2d4bbb\x2dc476\x2de2da2a7b9184.mount: Deactivated successfully. May 14 04:56:48.457394 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:46854.service - OpenSSH per-connection server daemon (10.0.0.1:46854). 
May 14 04:56:48.518119 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 46854 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:56:48.520643 sshd-session[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:56:48.525310 systemd-logind[1507]: New session 9 of user core. May 14 04:56:48.534029 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 04:56:48.670513 sshd[3794]: Connection closed by 10.0.0.1 port 46854 May 14 04:56:48.670833 sshd-session[3792]: pam_unix(sshd:session): session closed for user core May 14 04:56:48.675178 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:46854.service: Deactivated successfully. May 14 04:56:48.678080 systemd[1]: session-9.scope: Deactivated successfully. May 14 04:56:48.678817 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. May 14 04:56:48.680344 systemd-logind[1507]: Removed session 9. May 14 04:56:49.754024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964871050.mount: Deactivated successfully. 
May 14 04:56:50.052745 containerd[1523]: time="2025-05-14T04:56:50.052623293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:50.053476 containerd[1523]: time="2025-05-14T04:56:50.053446933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 04:56:50.054360 containerd[1523]: time="2025-05-14T04:56:50.054330814Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:50.055957 containerd[1523]: time="2025-05-14T04:56:50.055922495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:50.056381 containerd[1523]: time="2025-05-14T04:56:50.056348855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.080154539s" May 14 04:56:50.056414 containerd[1523]: time="2025-05-14T04:56:50.056378615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 04:56:50.066242 containerd[1523]: time="2025-05-14T04:56:50.066191540Z" level=info msg="CreateContainer within sandbox \"de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 04:56:50.074901 containerd[1523]: time="2025-05-14T04:56:50.074863024Z" level=info msg="Container 
05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:50.082604 containerd[1523]: time="2025-05-14T04:56:50.082567707Z" level=info msg="CreateContainer within sandbox \"de2ece7ef30c67ede80aa7634c3ed14d50d2a89ba16e098d02c20726610577a6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b\"" May 14 04:56:50.084361 containerd[1523]: time="2025-05-14T04:56:50.084325668Z" level=info msg="StartContainer for \"05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b\"" May 14 04:56:50.085712 containerd[1523]: time="2025-05-14T04:56:50.085679749Z" level=info msg="connecting to shim 05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b" address="unix:///run/containerd/s/addf587c45cee8358a3c4b993ef5206de5e78fc342a449e4c3ac67e25c678f99" protocol=ttrpc version=3 May 14 04:56:50.111870 systemd[1]: Started cri-containerd-05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b.scope - libcontainer container 05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b. May 14 04:56:50.144920 containerd[1523]: time="2025-05-14T04:56:50.144865817Z" level=info msg="StartContainer for \"05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b\" returns successfully" May 14 04:56:50.338672 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 04:56:50.338774 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 14 04:56:51.015907 kubelet[2770]: I0514 04:56:51.015828 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4th9d" podStartSLOduration=2.218039069 podStartE2EDuration="16.015811354s" podCreationTimestamp="2025-05-14 04:56:35 +0000 UTC" firstStartedPulling="2025-05-14 04:56:36.25924777 +0000 UTC m=+24.502859536" lastFinishedPulling="2025-05-14 04:56:50.057020055 +0000 UTC m=+38.300631821" observedRunningTime="2025-05-14 04:56:51.009419871 +0000 UTC m=+39.253031637" watchObservedRunningTime="2025-05-14 04:56:51.015811354 +0000 UTC m=+39.259423120" May 14 04:56:51.996179 kubelet[2770]: I0514 04:56:51.996135 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 04:56:53.693908 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:48520.service - OpenSSH per-connection server daemon (10.0.0.1:48520). May 14 04:56:53.768466 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 48520 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:56:53.770158 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:56:53.774993 systemd-logind[1507]: New session 10 of user core. May 14 04:56:53.779870 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 04:56:53.897077 sshd[4020]: Connection closed by 10.0.0.1 port 48520 May 14 04:56:53.897399 sshd-session[3996]: pam_unix(sshd:session): session closed for user core May 14 04:56:53.906802 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:48520.service: Deactivated successfully. May 14 04:56:53.908268 systemd[1]: session-10.scope: Deactivated successfully. May 14 04:56:53.909866 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. May 14 04:56:53.912423 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:48534.service - OpenSSH per-connection server daemon (10.0.0.1:48534). May 14 04:56:53.913392 systemd-logind[1507]: Removed session 10. 
May 14 04:56:53.963994 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 48534 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:56:53.965257 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:56:53.969359 systemd-logind[1507]: New session 11 of user core. May 14 04:56:53.977902 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 04:56:54.117180 sshd[4036]: Connection closed by 10.0.0.1 port 48534 May 14 04:56:54.117853 sshd-session[4034]: pam_unix(sshd:session): session closed for user core May 14 04:56:54.127637 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:48534.service: Deactivated successfully. May 14 04:56:54.129428 systemd[1]: session-11.scope: Deactivated successfully. May 14 04:56:54.132623 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. May 14 04:56:54.139169 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:48548.service - OpenSSH per-connection server daemon (10.0.0.1:48548). May 14 04:56:54.141014 systemd-logind[1507]: Removed session 11. May 14 04:56:54.198779 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 48548 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:56:54.199998 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:56:54.203974 systemd-logind[1507]: New session 12 of user core. May 14 04:56:54.214860 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 04:56:54.325227 sshd[4050]: Connection closed by 10.0.0.1 port 48548 May 14 04:56:54.325752 sshd-session[4048]: pam_unix(sshd:session): session closed for user core May 14 04:56:54.329071 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:48548.service: Deactivated successfully. May 14 04:56:54.332093 systemd[1]: session-12.scope: Deactivated successfully. May 14 04:56:54.332751 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. 
May 14 04:56:54.334082 systemd-logind[1507]: Removed session 12. May 14 04:56:56.842850 containerd[1523]: time="2025-05-14T04:56:56.842790881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n7f8z,Uid:31698ef2-ce30-4522-8f97-9ef88e5b07a0,Namespace:calico-system,Attempt:0,}" May 14 04:56:56.843403 containerd[1523]: time="2025-05-14T04:56:56.842809881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-6hm46,Uid:6086ed4c-abb2-47f6-b905-a6bcc217a9e3,Namespace:calico-apiserver,Attempt:0,}" May 14 04:56:57.144307 systemd-networkd[1443]: cali8f9c4c63864: Link UP May 14 04:56:57.144492 systemd-networkd[1443]: cali8f9c4c63864: Gained carrier May 14 04:56:57.175285 containerd[1523]: 2025-05-14 04:56:56.881 [INFO][4142] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 04:56:57.175285 containerd[1523]: 2025-05-14 04:56:56.940 [INFO][4142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0 calico-apiserver-78df6769c8- calico-apiserver 6086ed4c-abb2-47f6-b905-a6bcc217a9e3 727 0 2025-05-14 04:56:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78df6769c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78df6769c8-6hm46 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f9c4c63864 [] []}} ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-" May 14 04:56:57.175285 containerd[1523]: 2025-05-14 04:56:56.940 [INFO][4142] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" May 14 04:56:57.175285 containerd[1523]: 2025-05-14 04:56:57.078 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" HandleID="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Workload="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.103 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" HandleID="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Workload="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78df6769c8-6hm46", "timestamp":"2025-05-14 04:56:57.078589796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.103 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.103 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.104 [INFO][4170] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.105 [INFO][4170] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" host="localhost" May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.112 [INFO][4170] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.116 [INFO][4170] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.117 [INFO][4170] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.119 [INFO][4170] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 04:56:57.175522 containerd[1523]: 2025-05-14 04:56:57.119 [INFO][4170] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" host="localhost" May 14 04:56:57.175734 containerd[1523]: 2025-05-14 04:56:57.120 [INFO][4170] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0 May 14 04:56:57.175734 containerd[1523]: 2025-05-14 04:56:57.124 [INFO][4170] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" host="localhost" May 14 04:56:57.175734 containerd[1523]: 2025-05-14 04:56:57.128 [INFO][4170] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" host="localhost" May 14 04:56:57.175734 containerd[1523]: 2025-05-14 04:56:57.128 [INFO][4170] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" host="localhost" May 14 04:56:57.175734 containerd[1523]: 2025-05-14 04:56:57.128 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 04:56:57.175734 containerd[1523]: 2025-05-14 04:56:57.128 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" HandleID="k8s-pod-network.23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Workload="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" May 14 04:56:57.175846 containerd[1523]: 2025-05-14 04:56:57.131 [INFO][4142] cni-plugin/k8s.go 386: Populated endpoint ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0", GenerateName:"calico-apiserver-78df6769c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6086ed4c-abb2-47f6-b905-a6bcc217a9e3", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78df6769c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78df6769c8-6hm46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f9c4c63864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:57.175901 containerd[1523]: 2025-05-14 04:56:57.131 [INFO][4142] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" May 14 04:56:57.175901 containerd[1523]: 2025-05-14 04:56:57.131 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f9c4c63864 ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" May 14 04:56:57.175901 containerd[1523]: 2025-05-14 04:56:57.145 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" May 14 04:56:57.175955 containerd[1523]: 2025-05-14 04:56:57.145 [INFO][4142] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0", GenerateName:"calico-apiserver-78df6769c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6086ed4c-abb2-47f6-b905-a6bcc217a9e3", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78df6769c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0", Pod:"calico-apiserver-78df6769c8-6hm46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f9c4c63864", MAC:"9e:f1:82:d8:27:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:57.176001 containerd[1523]: 2025-05-14 04:56:57.173 [INFO][4142] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" 
Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-6hm46" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--6hm46-eth0" May 14 04:56:57.258432 systemd-networkd[1443]: calie122bb91a0b: Link UP May 14 04:56:57.258843 systemd-networkd[1443]: calie122bb91a0b: Gained carrier May 14 04:56:57.288931 containerd[1523]: 2025-05-14 04:56:56.871 [INFO][4116] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 04:56:57.288931 containerd[1523]: 2025-05-14 04:56:56.940 [INFO][4116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n7f8z-eth0 csi-node-driver- calico-system 31698ef2-ce30-4522-8f97-9ef88e5b07a0 604 0 2025-05-14 04:56:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n7f8z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie122bb91a0b [] []}} ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-" May 14 04:56:57.288931 containerd[1523]: 2025-05-14 04:56:56.940 [INFO][4116] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-eth0" May 14 04:56:57.288931 containerd[1523]: 2025-05-14 04:56:57.078 [INFO][4172] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" 
HandleID="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Workload="localhost-k8s-csi--node--driver--n7f8z-eth0" May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.103 [INFO][4172] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" HandleID="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Workload="localhost-k8s-csi--node--driver--n7f8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037c2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n7f8z", "timestamp":"2025-05-14 04:56:57.078565036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.103 [INFO][4172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.128 [INFO][4172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.128 [INFO][4172] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.130 [INFO][4172] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" host="localhost" May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.139 [INFO][4172] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.145 [INFO][4172] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.147 [INFO][4172] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.151 [INFO][4172] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 04:56:57.289138 containerd[1523]: 2025-05-14 04:56:57.151 [INFO][4172] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" host="localhost" May 14 04:56:57.289377 containerd[1523]: 2025-05-14 04:56:57.158 [INFO][4172] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f May 14 04:56:57.289377 containerd[1523]: 2025-05-14 04:56:57.183 [INFO][4172] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" host="localhost" May 14 04:56:57.289377 containerd[1523]: 2025-05-14 04:56:57.252 [INFO][4172] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" host="localhost" May 14 04:56:57.289377 containerd[1523]: 2025-05-14 04:56:57.252 [INFO][4172] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" host="localhost" May 14 04:56:57.289377 containerd[1523]: 2025-05-14 04:56:57.252 [INFO][4172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 04:56:57.289377 containerd[1523]: 2025-05-14 04:56:57.252 [INFO][4172] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" HandleID="k8s-pod-network.ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Workload="localhost-k8s-csi--node--driver--n7f8z-eth0" May 14 04:56:57.289483 containerd[1523]: 2025-05-14 04:56:57.256 [INFO][4116] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n7f8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31698ef2-ce30-4522-8f97-9ef88e5b07a0", ResourceVersion:"604", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n7f8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie122bb91a0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:57.289483 containerd[1523]: 2025-05-14 04:56:57.256 [INFO][4116] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-eth0" May 14 04:56:57.289549 containerd[1523]: 2025-05-14 04:56:57.256 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie122bb91a0b ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-eth0" May 14 04:56:57.289549 containerd[1523]: 2025-05-14 04:56:57.258 [INFO][4116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-eth0" May 14 04:56:57.289589 containerd[1523]: 2025-05-14 04:56:57.259 [INFO][4116] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" 
Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n7f8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31698ef2-ce30-4522-8f97-9ef88e5b07a0", ResourceVersion:"604", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f", Pod:"csi-node-driver-n7f8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie122bb91a0b", MAC:"5a:d7:ce:37:62:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:57.289634 containerd[1523]: 2025-05-14 04:56:57.286 [INFO][4116] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" Namespace="calico-system" Pod="csi-node-driver-n7f8z" WorkloadEndpoint="localhost-k8s-csi--node--driver--n7f8z-eth0" May 14 04:56:57.363209 containerd[1523]: 
time="2025-05-14T04:56:57.363143922Z" level=info msg="connecting to shim 23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0" address="unix:///run/containerd/s/492b9418b584aa97c9a4325089ade134d5006c3c7331fe2c3fc6383b1e01a24d" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:57.367475 containerd[1523]: time="2025-05-14T04:56:57.367440804Z" level=info msg="connecting to shim ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f" address="unix:///run/containerd/s/b071ccd654dadeb168c6545f7c8d85cfc1dcee82c7056aa90cf55b169ddb4a83" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:57.389871 systemd[1]: Started cri-containerd-ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f.scope - libcontainer container ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f. May 14 04:56:57.393683 systemd[1]: Started cri-containerd-23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0.scope - libcontainer container 23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0. 
May 14 04:56:57.403785 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:56:57.411031 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:56:57.434039 containerd[1523]: time="2025-05-14T04:56:57.433997064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n7f8z,Uid:31698ef2-ce30-4522-8f97-9ef88e5b07a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f\"" May 14 04:56:57.436900 containerd[1523]: time="2025-05-14T04:56:57.436871745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 04:56:57.439892 containerd[1523]: time="2025-05-14T04:56:57.439851426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-6hm46,Uid:6086ed4c-abb2-47f6-b905-a6bcc217a9e3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0\"" May 14 04:56:57.655530 kubelet[2770]: I0514 04:56:57.655414 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 04:56:57.730160 containerd[1523]: time="2025-05-14T04:56:57.730116354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b\" id:\"e4ae8d23aed5044e78098bf93e48090b2ec4a216f6d1e59ea7d3d198c5a1b6b9\" pid:4317 exit_status:1 exited_at:{seconds:1747198617 nanos:729818594}" May 14 04:56:57.801561 containerd[1523]: time="2025-05-14T04:56:57.801525616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05c674b43c62cf9dc76df8259057b69b4ca7c79e1451bce01fd437f0a2a9a35b\" id:\"786a705d077390c40e7965584c0931b28109ef0b25fbbfcee774abc05e0cfa24\" pid:4341 exit_status:1 exited_at:{seconds:1747198617 nanos:801263896}" May 14 04:56:58.321308 kubelet[2770]: I0514 
04:56:58.321274 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 04:56:58.367855 systemd-networkd[1443]: cali8f9c4c63864: Gained IPv6LL May 14 04:56:58.521939 containerd[1523]: time="2025-05-14T04:56:58.521891265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:58.522523 containerd[1523]: time="2025-05-14T04:56:58.522495785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 14 04:56:58.523124 containerd[1523]: time="2025-05-14T04:56:58.523097946Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:58.524794 containerd[1523]: time="2025-05-14T04:56:58.524767186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:56:58.525495 containerd[1523]: time="2025-05-14T04:56:58.525454186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.088547881s" May 14 04:56:58.525495 containerd[1523]: time="2025-05-14T04:56:58.525492506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 14 04:56:58.526362 containerd[1523]: time="2025-05-14T04:56:58.526344787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 04:56:58.528193 containerd[1523]: 
time="2025-05-14T04:56:58.528162587Z" level=info msg="CreateContainer within sandbox \"ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 04:56:58.536188 containerd[1523]: time="2025-05-14T04:56:58.536148029Z" level=info msg="Container 8473e2ec7aed138f966a1f189ba539e8ef69cf68437610749ccc8fdc2c37f48d: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:58.540364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585986774.mount: Deactivated successfully. May 14 04:56:58.543913 containerd[1523]: time="2025-05-14T04:56:58.543807672Z" level=info msg="CreateContainer within sandbox \"ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8473e2ec7aed138f966a1f189ba539e8ef69cf68437610749ccc8fdc2c37f48d\"" May 14 04:56:58.544406 containerd[1523]: time="2025-05-14T04:56:58.544281832Z" level=info msg="StartContainer for \"8473e2ec7aed138f966a1f189ba539e8ef69cf68437610749ccc8fdc2c37f48d\"" May 14 04:56:58.546180 containerd[1523]: time="2025-05-14T04:56:58.546153512Z" level=info msg="connecting to shim 8473e2ec7aed138f966a1f189ba539e8ef69cf68437610749ccc8fdc2c37f48d" address="unix:///run/containerd/s/b071ccd654dadeb168c6545f7c8d85cfc1dcee82c7056aa90cf55b169ddb4a83" protocol=ttrpc version=3 May 14 04:56:58.568846 systemd[1]: Started cri-containerd-8473e2ec7aed138f966a1f189ba539e8ef69cf68437610749ccc8fdc2c37f48d.scope - libcontainer container 8473e2ec7aed138f966a1f189ba539e8ef69cf68437610749ccc8fdc2c37f48d. 
May 14 04:56:58.599401 containerd[1523]: time="2025-05-14T04:56:58.599264087Z" level=info msg="StartContainer for \"8473e2ec7aed138f966a1f189ba539e8ef69cf68437610749ccc8fdc2c37f48d\" returns successfully" May 14 04:56:58.751859 systemd-networkd[1443]: calie122bb91a0b: Gained IPv6LL May 14 04:56:58.843397 containerd[1523]: time="2025-05-14T04:56:58.843152077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j7wt8,Uid:1b538f75-bc5d-494b-b78e-445981161c1a,Namespace:kube-system,Attempt:0,}" May 14 04:56:58.843500 containerd[1523]: time="2025-05-14T04:56:58.843433317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4ps6,Uid:21401a24-e42e-46ae-b092-d6ef90fb720c,Namespace:kube-system,Attempt:0,}" May 14 04:56:58.982738 systemd-networkd[1443]: cali9c43d18d9fd: Link UP May 14 04:56:58.984366 systemd-networkd[1443]: cali9c43d18d9fd: Gained carrier May 14 04:56:58.999296 containerd[1523]: 2025-05-14 04:56:58.898 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0 coredns-7db6d8ff4d- kube-system 1b538f75-bc5d-494b-b78e-445981161c1a 724 0 2025-05-14 04:56:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-j7wt8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c43d18d9fd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-" May 14 04:56:58.999296 containerd[1523]: 2025-05-14 04:56:58.899 [INFO][4426] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" May 14 04:56:58.999296 containerd[1523]: 2025-05-14 04:56:58.933 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" HandleID="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Workload="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.947 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" HandleID="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Workload="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8de0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-j7wt8", "timestamp":"2025-05-14 04:56:58.933495303 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.947 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.947 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.947 [INFO][4466] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.949 [INFO][4466] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" host="localhost" May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.953 [INFO][4466] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.957 [INFO][4466] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.959 [INFO][4466] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.961 [INFO][4466] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 04:56:58.999530 containerd[1523]: 2025-05-14 04:56:58.961 [INFO][4466] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" host="localhost" May 14 04:56:58.999751 containerd[1523]: 2025-05-14 04:56:58.963 [INFO][4466] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02 May 14 04:56:58.999751 containerd[1523]: 2025-05-14 04:56:58.966 [INFO][4466] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" host="localhost" May 14 04:56:58.999751 containerd[1523]: 2025-05-14 04:56:58.973 [INFO][4466] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" host="localhost" May 14 04:56:58.999751 containerd[1523]: 2025-05-14 04:56:58.974 [INFO][4466] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" host="localhost" May 14 04:56:58.999751 containerd[1523]: 2025-05-14 04:56:58.974 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 04:56:58.999751 containerd[1523]: 2025-05-14 04:56:58.974 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" HandleID="k8s-pod-network.b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Workload="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" May 14 04:56:58.999862 containerd[1523]: 2025-05-14 04:56:58.978 [INFO][4426] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b538f75-bc5d-494b-b78e-445981161c1a", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-j7wt8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c43d18d9fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:58.999918 containerd[1523]: 2025-05-14 04:56:58.978 [INFO][4426] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" May 14 04:56:58.999918 containerd[1523]: 2025-05-14 04:56:58.978 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c43d18d9fd ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" May 14 04:56:58.999918 containerd[1523]: 2025-05-14 04:56:58.984 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" May 14 
04:56:58.999976 containerd[1523]: 2025-05-14 04:56:58.985 [INFO][4426] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b538f75-bc5d-494b-b78e-445981161c1a", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02", Pod:"coredns-7db6d8ff4d-j7wt8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c43d18d9fd", MAC:"be:c0:7f:cc:51:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:58.999976 containerd[1523]: 2025-05-14 04:56:58.995 [INFO][4426] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j7wt8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j7wt8-eth0" May 14 04:56:59.032869 systemd-networkd[1443]: cali28ab57b1443: Link UP May 14 04:56:59.033426 systemd-networkd[1443]: cali28ab57b1443: Gained carrier May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.903 [INFO][4442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0 coredns-7db6d8ff4d- kube-system 21401a24-e42e-46ae-b092-d6ef90fb720c 726 0 2025-05-14 04:56:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-j4ps6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28ab57b1443 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.904 [INFO][4442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.937 [INFO][4473] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" HandleID="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Workload="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.952 [INFO][4473] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" HandleID="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Workload="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-j4ps6", "timestamp":"2025-05-14 04:56:58.937814624 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.952 [INFO][4473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.974 [INFO][4473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.974 [INFO][4473] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.977 [INFO][4473] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.980 [INFO][4473] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.985 [INFO][4473] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.989 [INFO][4473] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.993 [INFO][4473] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.993 [INFO][4473] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:58.995 [INFO][4473] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61 May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:59.000 [INFO][4473] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:59.026 [INFO][4473] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:59.026 [INFO][4473] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" host="localhost" May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:59.026 [INFO][4473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 04:56:59.055062 containerd[1523]: 2025-05-14 04:56:59.026 [INFO][4473] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" HandleID="k8s-pod-network.2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Workload="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" May 14 04:56:59.055626 containerd[1523]: 2025-05-14 04:56:59.029 [INFO][4442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"21401a24-e42e-46ae-b092-d6ef90fb720c", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-j4ps6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28ab57b1443", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:59.055626 containerd[1523]: 2025-05-14 04:56:59.029 [INFO][4442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" May 14 04:56:59.055626 containerd[1523]: 2025-05-14 04:56:59.029 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28ab57b1443 ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" May 14 04:56:59.055626 containerd[1523]: 2025-05-14 04:56:59.034 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" May 14 
04:56:59.055626 containerd[1523]: 2025-05-14 04:56:59.034 [INFO][4442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"21401a24-e42e-46ae-b092-d6ef90fb720c", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61", Pod:"coredns-7db6d8ff4d-j4ps6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28ab57b1443", MAC:"8e:18:54:6b:25:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:56:59.055626 containerd[1523]: 2025-05-14 04:56:59.047 [INFO][4442] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j4ps6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j4ps6-eth0" May 14 04:56:59.056542 containerd[1523]: time="2025-05-14T04:56:59.056051897Z" level=info msg="connecting to shim b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02" address="unix:///run/containerd/s/59e45739a14fb0dfa052583d69a65c0f38929c50b0ae9c0d3edd00c2603d5561" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:59.096832 containerd[1523]: time="2025-05-14T04:56:59.096781388Z" level=info msg="connecting to shim 2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61" address="unix:///run/containerd/s/5a396be95906798198749370889f98cbeacb7e553eceaf35839494fc257dd988" namespace=k8s.io protocol=ttrpc version=3 May 14 04:56:59.098061 systemd[1]: Started cri-containerd-b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02.scope - libcontainer container b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02. May 14 04:56:59.123311 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:56:59.134881 systemd[1]: Started cri-containerd-2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61.scope - libcontainer container 2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61. 
May 14 04:56:59.155131 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:56:59.167386 containerd[1523]: time="2025-05-14T04:56:59.167346407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j7wt8,Uid:1b538f75-bc5d-494b-b78e-445981161c1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02\"" May 14 04:56:59.169742 systemd-networkd[1443]: vxlan.calico: Link UP May 14 04:56:59.169755 systemd-networkd[1443]: vxlan.calico: Gained carrier May 14 04:56:59.172188 containerd[1523]: time="2025-05-14T04:56:59.171784648Z" level=info msg="CreateContainer within sandbox \"b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 04:56:59.180628 containerd[1523]: time="2025-05-14T04:56:59.180596890Z" level=info msg="Container fceb3e79d97060760708d67efdfe559834eadc7b7af2eb89f6d13ecb6cb77996: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:59.187359 containerd[1523]: time="2025-05-14T04:56:59.187326852Z" level=info msg="CreateContainer within sandbox \"b40774e6399f8f2d7efc05707c7d96c4c5c74efdcb456638deaf745502e9bb02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fceb3e79d97060760708d67efdfe559834eadc7b7af2eb89f6d13ecb6cb77996\"" May 14 04:56:59.187950 containerd[1523]: time="2025-05-14T04:56:59.187917892Z" level=info msg="StartContainer for \"fceb3e79d97060760708d67efdfe559834eadc7b7af2eb89f6d13ecb6cb77996\"" May 14 04:56:59.188155 containerd[1523]: time="2025-05-14T04:56:59.188117212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j4ps6,Uid:21401a24-e42e-46ae-b092-d6ef90fb720c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61\"" May 14 04:56:59.188675 containerd[1523]: 
time="2025-05-14T04:56:59.188648132Z" level=info msg="connecting to shim fceb3e79d97060760708d67efdfe559834eadc7b7af2eb89f6d13ecb6cb77996" address="unix:///run/containerd/s/59e45739a14fb0dfa052583d69a65c0f38929c50b0ae9c0d3edd00c2603d5561" protocol=ttrpc version=3 May 14 04:56:59.192109 containerd[1523]: time="2025-05-14T04:56:59.192060093Z" level=info msg="CreateContainer within sandbox \"2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 04:56:59.227916 systemd[1]: Started cri-containerd-fceb3e79d97060760708d67efdfe559834eadc7b7af2eb89f6d13ecb6cb77996.scope - libcontainer container fceb3e79d97060760708d67efdfe559834eadc7b7af2eb89f6d13ecb6cb77996. May 14 04:56:59.246773 containerd[1523]: time="2025-05-14T04:56:59.246054588Z" level=info msg="Container af06271a8057f5497e138fa7f1c436dc2e4d2c2d7320bf89ded2b98da2e12bc2: CDI devices from CRI Config.CDIDevices: []" May 14 04:56:59.254773 containerd[1523]: time="2025-05-14T04:56:59.254737870Z" level=info msg="CreateContainer within sandbox \"2d77c636df2a33142db49e4a013584d7141a1a019a14d6302f59b549662bfb61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af06271a8057f5497e138fa7f1c436dc2e4d2c2d7320bf89ded2b98da2e12bc2\"" May 14 04:56:59.256321 containerd[1523]: time="2025-05-14T04:56:59.256049990Z" level=info msg="StartContainer for \"af06271a8057f5497e138fa7f1c436dc2e4d2c2d7320bf89ded2b98da2e12bc2\"" May 14 04:56:59.258420 containerd[1523]: time="2025-05-14T04:56:59.258164071Z" level=info msg="connecting to shim af06271a8057f5497e138fa7f1c436dc2e4d2c2d7320bf89ded2b98da2e12bc2" address="unix:///run/containerd/s/5a396be95906798198749370889f98cbeacb7e553eceaf35839494fc257dd988" protocol=ttrpc version=3 May 14 04:56:59.282254 containerd[1523]: time="2025-05-14T04:56:59.282204237Z" level=info msg="StartContainer for \"fceb3e79d97060760708d67efdfe559834eadc7b7af2eb89f6d13ecb6cb77996\" returns successfully" May 14 
04:56:59.283080 systemd[1]: Started cri-containerd-af06271a8057f5497e138fa7f1c436dc2e4d2c2d7320bf89ded2b98da2e12bc2.scope - libcontainer container af06271a8057f5497e138fa7f1c436dc2e4d2c2d7320bf89ded2b98da2e12bc2. May 14 04:56:59.329828 containerd[1523]: time="2025-05-14T04:56:59.329783330Z" level=info msg="StartContainer for \"af06271a8057f5497e138fa7f1c436dc2e4d2c2d7320bf89ded2b98da2e12bc2\" returns successfully" May 14 04:56:59.345100 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:48556.service - OpenSSH per-connection server daemon (10.0.0.1:48556). May 14 04:56:59.433717 sshd[4744]: Accepted publickey for core from 10.0.0.1 port 48556 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:56:59.433744 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:56:59.440496 systemd-logind[1507]: New session 13 of user core. May 14 04:56:59.446889 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 04:56:59.575996 sshd[4767]: Connection closed by 10.0.0.1 port 48556 May 14 04:56:59.576340 sshd-session[4744]: pam_unix(sshd:session): session closed for user core May 14 04:56:59.586949 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:48556.service: Deactivated successfully. May 14 04:56:59.589076 systemd[1]: session-13.scope: Deactivated successfully. May 14 04:56:59.592170 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. May 14 04:56:59.594011 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:48564.service - OpenSSH per-connection server daemon (10.0.0.1:48564). May 14 04:56:59.598480 systemd-logind[1507]: Removed session 13. May 14 04:56:59.640977 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 48564 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:56:59.642139 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:56:59.645806 systemd-logind[1507]: New session 14 of user core. 
May 14 04:56:59.655834 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 04:56:59.849932 containerd[1523]: time="2025-05-14T04:56:59.849841149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b74c5b6f-snz4p,Uid:809f0cc0-f234-4b5a-b622-52435a22fb76,Namespace:calico-system,Attempt:0,}" May 14 04:56:59.978289 sshd[4804]: Connection closed by 10.0.0.1 port 48564 May 14 04:56:59.978271 sshd-session[4800]: pam_unix(sshd:session): session closed for user core May 14 04:56:59.983193 systemd-networkd[1443]: cali73f2c6bc153: Link UP May 14 04:56:59.983582 systemd-networkd[1443]: cali73f2c6bc153: Gained carrier May 14 04:56:59.985829 systemd[1]: session-14.scope: Deactivated successfully. May 14 04:56:59.990210 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:48564.service: Deactivated successfully. May 14 04:56:59.994217 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. May 14 04:56:59.999876 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:48566.service - OpenSSH per-connection server daemon (10.0.0.1:48566). May 14 04:57:00.002605 systemd-logind[1507]: Removed session 14. 
May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.898 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0 calico-kube-controllers-67b74c5b6f- calico-system 809f0cc0-f234-4b5a-b622-52435a22fb76 721 0 2025-05-14 04:56:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67b74c5b6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67b74c5b6f-snz4p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73f2c6bc153 [] []}} ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.898 [INFO][4815] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.932 [INFO][4830] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" HandleID="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Workload="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.947 [INFO][4830] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" HandleID="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Workload="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036b680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67b74c5b6f-snz4p", "timestamp":"2025-05-14 04:56:59.932170891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.947 [INFO][4830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.947 [INFO][4830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.947 [INFO][4830] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.949 [INFO][4830] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.953 [INFO][4830] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.958 [INFO][4830] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.960 [INFO][4830] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.963 [INFO][4830] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.963 [INFO][4830] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.965 [INFO][4830] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982 May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.968 [INFO][4830] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.974 [INFO][4830] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.975 [INFO][4830] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" host="localhost" May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.975 [INFO][4830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 04:57:00.006470 containerd[1523]: 2025-05-14 04:56:59.975 [INFO][4830] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" HandleID="k8s-pod-network.44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Workload="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" May 14 04:57:00.007643 containerd[1523]: 2025-05-14 04:56:59.979 [INFO][4815] cni-plugin/k8s.go 386: Populated endpoint ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0", GenerateName:"calico-kube-controllers-67b74c5b6f-", Namespace:"calico-system", SelfLink:"", UID:"809f0cc0-f234-4b5a-b622-52435a22fb76", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b74c5b6f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67b74c5b6f-snz4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73f2c6bc153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:57:00.007643 containerd[1523]: 2025-05-14 04:56:59.979 [INFO][4815] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" May 14 04:57:00.007643 containerd[1523]: 2025-05-14 04:56:59.979 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73f2c6bc153 ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" May 14 04:57:00.007643 containerd[1523]: 2025-05-14 04:56:59.991 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" May 14 04:57:00.007643 containerd[1523]: 2025-05-14 04:56:59.991 [INFO][4815] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0", GenerateName:"calico-kube-controllers-67b74c5b6f-", Namespace:"calico-system", SelfLink:"", UID:"809f0cc0-f234-4b5a-b622-52435a22fb76", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b74c5b6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982", Pod:"calico-kube-controllers-67b74c5b6f-snz4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73f2c6bc153", MAC:"ce:8c:5c:a5:3a:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:57:00.007643 containerd[1523]: 2025-05-14 04:57:00.004 [INFO][4815] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" Namespace="calico-system" Pod="calico-kube-controllers-67b74c5b6f-snz4p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b74c5b6f--snz4p-eth0" May 14 04:57:00.042096 containerd[1523]: time="2025-05-14T04:57:00.042016480Z" level=info msg="connecting to shim 44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982" address="unix:///run/containerd/s/65ef414ad33d6b39fdbabf5f135f09916c508acc2d89a507512dbb35fa0fdd8c" namespace=k8s.io protocol=ttrpc version=3 May 14 04:57:00.044306 kubelet[2770]: I0514 04:57:00.044205 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j7wt8" podStartSLOduration=33.044184001 podStartE2EDuration="33.044184001s" podCreationTimestamp="2025-05-14 04:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:57:00.0425474 +0000 UTC m=+48.286159166" watchObservedRunningTime="2025-05-14 04:57:00.044184001 +0000 UTC m=+48.287795767" May 14 04:57:00.075269 kubelet[2770]: I0514 04:57:00.075181 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j4ps6" podStartSLOduration=33.075161289 podStartE2EDuration="33.075161289s" podCreationTimestamp="2025-05-14 04:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:57:00.073778288 +0000 UTC m=+48.317390054" watchObservedRunningTime="2025-05-14 04:57:00.075161289 +0000 UTC m=+48.318773055" May 14 04:57:00.088921 sshd[4843]: Accepted publickey for core from 10.0.0.1 port 48566 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:57:00.089111 systemd[1]: Started cri-containerd-44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982.scope - libcontainer 
container 44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982. May 14 04:57:00.093614 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:57:00.105498 systemd-logind[1507]: New session 15 of user core. May 14 04:57:00.112856 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 04:57:00.121514 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:57:00.175108 containerd[1523]: time="2025-05-14T04:57:00.175036634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b74c5b6f-snz4p,Uid:809f0cc0-f234-4b5a-b622-52435a22fb76,Namespace:calico-system,Attempt:0,} returns sandbox id \"44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982\"" May 14 04:57:00.287821 systemd-networkd[1443]: cali9c43d18d9fd: Gained IPv6LL May 14 04:57:00.351867 systemd-networkd[1443]: cali28ab57b1443: Gained IPv6LL May 14 04:57:00.403873 containerd[1523]: time="2025-05-14T04:57:00.403773451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:57:00.404560 containerd[1523]: time="2025-05-14T04:57:00.404454131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 04:57:00.405302 containerd[1523]: time="2025-05-14T04:57:00.405258451Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:57:00.407871 containerd[1523]: time="2025-05-14T04:57:00.407776652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:57:00.408724 
containerd[1523]: time="2025-05-14T04:57:00.408661692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.882294265s" May 14 04:57:00.408762 containerd[1523]: time="2025-05-14T04:57:00.408740732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 04:57:00.410061 containerd[1523]: time="2025-05-14T04:57:00.410035453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 04:57:00.412851 containerd[1523]: time="2025-05-14T04:57:00.412472373Z" level=info msg="CreateContainer within sandbox \"23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 04:57:00.420940 containerd[1523]: time="2025-05-14T04:57:00.420894295Z" level=info msg="Container a360d15a16a5fbf7e2c9b0da6c90233f1366aa15169a269fc89fc590ccaba404: CDI devices from CRI Config.CDIDevices: []" May 14 04:57:00.427711 containerd[1523]: time="2025-05-14T04:57:00.427663337Z" level=info msg="CreateContainer within sandbox \"23d2a7b6fbca367b9eae41f04230e7e10f9ae66060ba6e47564bf1f1381410e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a360d15a16a5fbf7e2c9b0da6c90233f1366aa15169a269fc89fc590ccaba404\"" May 14 04:57:00.428082 containerd[1523]: time="2025-05-14T04:57:00.428058377Z" level=info msg="StartContainer for \"a360d15a16a5fbf7e2c9b0da6c90233f1366aa15169a269fc89fc590ccaba404\"" May 14 04:57:00.429041 containerd[1523]: time="2025-05-14T04:57:00.428999737Z" level=info msg="connecting to shim 
a360d15a16a5fbf7e2c9b0da6c90233f1366aa15169a269fc89fc590ccaba404" address="unix:///run/containerd/s/492b9418b584aa97c9a4325089ade134d5006c3c7331fe2c3fc6383b1e01a24d" protocol=ttrpc version=3 May 14 04:57:00.455847 systemd[1]: Started cri-containerd-a360d15a16a5fbf7e2c9b0da6c90233f1366aa15169a269fc89fc590ccaba404.scope - libcontainer container a360d15a16a5fbf7e2c9b0da6c90233f1366aa15169a269fc89fc590ccaba404. May 14 04:57:00.495593 containerd[1523]: time="2025-05-14T04:57:00.495562314Z" level=info msg="StartContainer for \"a360d15a16a5fbf7e2c9b0da6c90233f1366aa15169a269fc89fc590ccaba404\" returns successfully" May 14 04:57:00.607867 systemd-networkd[1443]: vxlan.calico: Gained IPv6LL May 14 04:57:00.843000 containerd[1523]: time="2025-05-14T04:57:00.842957921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-m7r2k,Uid:8a399c2a-61db-455d-987b-a416b563bd32,Namespace:calico-apiserver,Attempt:0,}" May 14 04:57:00.974546 systemd-networkd[1443]: calief4a00810d4: Link UP May 14 04:57:00.977117 systemd-networkd[1443]: calief4a00810d4: Gained carrier May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.887 [INFO][4965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0 calico-apiserver-78df6769c8- calico-apiserver 8a399c2a-61db-455d-987b-a416b563bd32 725 0 2025-05-14 04:56:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78df6769c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78df6769c8-m7r2k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calief4a00810d4 [] []}} ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Namespace="calico-apiserver" 
Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.887 [INFO][4965] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.921 [INFO][4979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" HandleID="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Workload="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.934 [INFO][4979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" HandleID="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Workload="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78df6769c8-m7r2k", "timestamp":"2025-05-14 04:57:00.921611781 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.934 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.934 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.934 [INFO][4979] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.936 [INFO][4979] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.940 [INFO][4979] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.945 [INFO][4979] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.947 [INFO][4979] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.950 [INFO][4979] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.951 [INFO][4979] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.952 [INFO][4979] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1 May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.956 [INFO][4979] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.967 [INFO][4979] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.968 [INFO][4979] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" host="localhost" May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.968 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 04:57:00.993442 containerd[1523]: 2025-05-14 04:57:00.968 [INFO][4979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" HandleID="k8s-pod-network.31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Workload="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" May 14 04:57:00.994098 containerd[1523]: 2025-05-14 04:57:00.971 [INFO][4965] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0", GenerateName:"calico-apiserver-78df6769c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a399c2a-61db-455d-987b-a416b563bd32", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78df6769c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78df6769c8-m7r2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief4a00810d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:57:00.994098 containerd[1523]: 2025-05-14 04:57:00.972 [INFO][4965] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" May 14 04:57:00.994098 containerd[1523]: 2025-05-14 04:57:00.972 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief4a00810d4 ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" May 14 04:57:00.994098 containerd[1523]: 2025-05-14 04:57:00.977 [INFO][4965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" May 14 04:57:00.994098 containerd[1523]: 2025-05-14 04:57:00.977 [INFO][4965] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0", GenerateName:"calico-apiserver-78df6769c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a399c2a-61db-455d-987b-a416b563bd32", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 4, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78df6769c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1", Pod:"calico-apiserver-78df6769c8-m7r2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief4a00810d4", MAC:"06:1f:65:1c:6b:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 04:57:00.994098 containerd[1523]: 2025-05-14 04:57:00.987 [INFO][4965] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" 
Namespace="calico-apiserver" Pod="calico-apiserver-78df6769c8-m7r2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--78df6769c8--m7r2k-eth0" May 14 04:57:01.028885 containerd[1523]: time="2025-05-14T04:57:01.028332807Z" level=info msg="connecting to shim 31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1" address="unix:///run/containerd/s/6f2b6dee52c0bb0f7d8fc960812f96b21d0035261a97b84e77d1c071e19320ef" namespace=k8s.io protocol=ttrpc version=3 May 14 04:57:01.061741 kubelet[2770]: I0514 04:57:01.061676 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78df6769c8-6hm46" podStartSLOduration=24.092722148 podStartE2EDuration="27.061659215s" podCreationTimestamp="2025-05-14 04:56:34 +0000 UTC" firstStartedPulling="2025-05-14 04:56:57.440908146 +0000 UTC m=+45.684519912" lastFinishedPulling="2025-05-14 04:57:00.409845253 +0000 UTC m=+48.653456979" observedRunningTime="2025-05-14 04:57:01.059903815 +0000 UTC m=+49.303515581" watchObservedRunningTime="2025-05-14 04:57:01.061659215 +0000 UTC m=+49.305270981" May 14 04:57:01.062855 systemd[1]: Started cri-containerd-31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1.scope - libcontainer container 31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1. 
May 14 04:57:01.084788 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 04:57:01.108970 containerd[1523]: time="2025-05-14T04:57:01.108795066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78df6769c8-m7r2k,Uid:8a399c2a-61db-455d-987b-a416b563bd32,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1\"" May 14 04:57:01.113261 containerd[1523]: time="2025-05-14T04:57:01.113196587Z" level=info msg="CreateContainer within sandbox \"31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 04:57:01.119713 containerd[1523]: time="2025-05-14T04:57:01.119673309Z" level=info msg="Container fb1a3ff873845e483d8c03405fc10368335f881173a7e954a5b58574860ec94a: CDI devices from CRI Config.CDIDevices: []" May 14 04:57:01.129100 containerd[1523]: time="2025-05-14T04:57:01.129060591Z" level=info msg="CreateContainer within sandbox \"31bd3b3a9637747ec7056d6e4e6f2e2064aaba61c7287c145c647790ec2a54b1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb1a3ff873845e483d8c03405fc10368335f881173a7e954a5b58574860ec94a\"" May 14 04:57:01.130618 containerd[1523]: time="2025-05-14T04:57:01.129539071Z" level=info msg="StartContainer for \"fb1a3ff873845e483d8c03405fc10368335f881173a7e954a5b58574860ec94a\"" May 14 04:57:01.131609 containerd[1523]: time="2025-05-14T04:57:01.131582712Z" level=info msg="connecting to shim fb1a3ff873845e483d8c03405fc10368335f881173a7e954a5b58574860ec94a" address="unix:///run/containerd/s/6f2b6dee52c0bb0f7d8fc960812f96b21d0035261a97b84e77d1c071e19320ef" protocol=ttrpc version=3 May 14 04:57:01.165166 systemd[1]: Started cri-containerd-fb1a3ff873845e483d8c03405fc10368335f881173a7e954a5b58574860ec94a.scope - libcontainer container 
fb1a3ff873845e483d8c03405fc10368335f881173a7e954a5b58574860ec94a. May 14 04:57:01.206990 containerd[1523]: time="2025-05-14T04:57:01.206951449Z" level=info msg="StartContainer for \"fb1a3ff873845e483d8c03405fc10368335f881173a7e954a5b58574860ec94a\" returns successfully" May 14 04:57:01.568520 systemd-networkd[1443]: cali73f2c6bc153: Gained IPv6LL May 14 04:57:01.646147 containerd[1523]: time="2025-05-14T04:57:01.646095353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:57:01.648045 containerd[1523]: time="2025-05-14T04:57:01.646803353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 14 04:57:01.648045 containerd[1523]: time="2025-05-14T04:57:01.647852273Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:57:01.650568 containerd[1523]: time="2025-05-14T04:57:01.650340274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 04:57:01.651226 containerd[1523]: time="2025-05-14T04:57:01.651182554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.241117741s" May 14 04:57:01.651285 containerd[1523]: time="2025-05-14T04:57:01.651228074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" 
returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 14 04:57:01.656715 containerd[1523]: time="2025-05-14T04:57:01.655259035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 04:57:01.657946 containerd[1523]: time="2025-05-14T04:57:01.657923635Z" level=info msg="CreateContainer within sandbox \"ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 04:57:01.671127 containerd[1523]: time="2025-05-14T04:57:01.670173478Z" level=info msg="Container 9a7b21386e06d28868ce69adefb26f99bf53e4f67f4ca557361f561b1ac3e1ae: CDI devices from CRI Config.CDIDevices: []" May 14 04:57:01.688350 containerd[1523]: time="2025-05-14T04:57:01.688297603Z" level=info msg="CreateContainer within sandbox \"ce41c2b5afc21f7cd932a83365dc62e3940725c7b6db4c9542d7cdd0f4c3ed6f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9a7b21386e06d28868ce69adefb26f99bf53e4f67f4ca557361f561b1ac3e1ae\"" May 14 04:57:01.689440 containerd[1523]: time="2025-05-14T04:57:01.689249883Z" level=info msg="StartContainer for \"9a7b21386e06d28868ce69adefb26f99bf53e4f67f4ca557361f561b1ac3e1ae\"" May 14 04:57:01.690793 containerd[1523]: time="2025-05-14T04:57:01.690769643Z" level=info msg="connecting to shim 9a7b21386e06d28868ce69adefb26f99bf53e4f67f4ca557361f561b1ac3e1ae" address="unix:///run/containerd/s/b071ccd654dadeb168c6545f7c8d85cfc1dcee82c7056aa90cf55b169ddb4a83" protocol=ttrpc version=3 May 14 04:57:01.721934 systemd[1]: Started cri-containerd-9a7b21386e06d28868ce69adefb26f99bf53e4f67f4ca557361f561b1ac3e1ae.scope - libcontainer container 9a7b21386e06d28868ce69adefb26f99bf53e4f67f4ca557361f561b1ac3e1ae. 
May 14 04:57:01.788399 containerd[1523]: time="2025-05-14T04:57:01.788345546Z" level=info msg="StartContainer for \"9a7b21386e06d28868ce69adefb26f99bf53e4f67f4ca557361f561b1ac3e1ae\" returns successfully" May 14 04:57:01.832956 sshd[4904]: Connection closed by 10.0.0.1 port 48566 May 14 04:57:01.833261 sshd-session[4843]: pam_unix(sshd:session): session closed for user core May 14 04:57:01.857477 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:48566.service: Deactivated successfully. May 14 04:57:01.860465 systemd[1]: session-15.scope: Deactivated successfully. May 14 04:57:01.860725 systemd[1]: session-15.scope: Consumed 512ms CPU time, 66.3M memory peak. May 14 04:57:01.861530 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. May 14 04:57:01.867255 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:48578.service - OpenSSH per-connection server daemon (10.0.0.1:48578). May 14 04:57:01.867778 systemd-logind[1507]: Removed session 15. May 14 04:57:01.924245 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 48578 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 04:57:01.925563 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 04:57:01.930420 systemd-logind[1507]: New session 16 of user core. May 14 04:57:01.939864 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 14 04:57:01.950082 kubelet[2770]: I0514 04:57:01.949982 2770 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 14 04:57:01.950082 kubelet[2770]: I0514 04:57:01.950028 2770 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 14 04:57:02.063721 kubelet[2770]: I0514 04:57:02.063683 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 04:57:02.072374 kubelet[2770]: I0514 04:57:02.072253 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78df6769c8-m7r2k" podStartSLOduration=28.072238092 podStartE2EDuration="28.072238092s" podCreationTimestamp="2025-05-14 04:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 04:57:02.070848572 +0000 UTC m=+50.314460338" watchObservedRunningTime="2025-05-14 04:57:02.072238092 +0000 UTC m=+50.315849898"
May 14 04:57:02.084856 kubelet[2770]: I0514 04:57:02.084696 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n7f8z" podStartSLOduration=21.865351244 podStartE2EDuration="26.084678895s" podCreationTimestamp="2025-05-14 04:56:36 +0000 UTC" firstStartedPulling="2025-05-14 04:56:57.435553264 +0000 UTC m=+45.679165030" lastFinishedPulling="2025-05-14 04:57:01.654880875 +0000 UTC m=+49.898492681" observedRunningTime="2025-05-14 04:57:02.084234575 +0000 UTC m=+50.327846341" watchObservedRunningTime="2025-05-14 04:57:02.084678895 +0000 UTC m=+50.328290701"
May 14 04:57:02.258926 sshd[5125]: Connection closed by 10.0.0.1 port 48578
May 14 04:57:02.257962 sshd-session[5122]: pam_unix(sshd:session): session closed for user core
May 14 04:57:02.274150 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:48578.service: Deactivated successfully.
May 14 04:57:02.275765 systemd[1]: session-16.scope: Deactivated successfully.
May 14 04:57:02.279404 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit.
May 14 04:57:02.282764 systemd-logind[1507]: Removed session 16.
May 14 04:57:02.288979 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:48590.service - OpenSSH per-connection server daemon (10.0.0.1:48590).
May 14 04:57:02.349401 sshd[5141]: Accepted publickey for core from 10.0.0.1 port 48590 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ
May 14 04:57:02.351279 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 04:57:02.355280 systemd-logind[1507]: New session 17 of user core.
May 14 04:57:02.361874 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 04:57:02.506604 sshd[5143]: Connection closed by 10.0.0.1 port 48590
May 14 04:57:02.506844 sshd-session[5141]: pam_unix(sshd:session): session closed for user core
May 14 04:57:02.510058 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:48590.service: Deactivated successfully.
May 14 04:57:02.511536 systemd[1]: session-17.scope: Deactivated successfully.
May 14 04:57:02.512169 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit.
May 14 04:57:02.513178 systemd-logind[1507]: Removed session 17.
May 14 04:57:02.848010 systemd-networkd[1443]: calief4a00810d4: Gained IPv6LL
May 14 04:57:03.067180 kubelet[2770]: I0514 04:57:03.066683 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 04:57:03.238876 containerd[1523]: time="2025-05-14T04:57:03.238837306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 04:57:03.239403 containerd[1523]: time="2025-05-14T04:57:03.239371826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116"
May 14 04:57:03.240552 containerd[1523]: time="2025-05-14T04:57:03.240510706Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 04:57:03.243737 containerd[1523]: time="2025-05-14T04:57:03.243222107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 04:57:03.244058 containerd[1523]: time="2025-05-14T04:57:03.244024347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.588728112s"
May 14 04:57:03.244098 containerd[1523]: time="2025-05-14T04:57:03.244058827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\""
May 14 04:57:03.253755 containerd[1523]: time="2025-05-14T04:57:03.251811269Z" level=info msg="CreateContainer within sandbox \"44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 14 04:57:03.259941 containerd[1523]: time="2025-05-14T04:57:03.259237990Z" level=info msg="Container ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52: CDI devices from CRI Config.CDIDevices: []"
May 14 04:57:03.266525 containerd[1523]: time="2025-05-14T04:57:03.266490392Z" level=info msg="CreateContainer within sandbox \"44ef2a52d998ff7ff206460a03411e21ce2cd2458a46725c497d737088c7d982\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52\""
May 14 04:57:03.267697 containerd[1523]: time="2025-05-14T04:57:03.267660272Z" level=info msg="StartContainer for \"ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52\""
May 14 04:57:03.268960 containerd[1523]: time="2025-05-14T04:57:03.268925672Z" level=info msg="connecting to shim ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52" address="unix:///run/containerd/s/65ef414ad33d6b39fdbabf5f135f09916c508acc2d89a507512dbb35fa0fdd8c" protocol=ttrpc version=3
May 14 04:57:03.289006 systemd[1]: Started cri-containerd-ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52.scope - libcontainer container ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52.
May 14 04:57:03.330134 containerd[1523]: time="2025-05-14T04:57:03.330091285Z" level=info msg="StartContainer for \"ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52\" returns successfully"
May 14 04:57:04.121615 containerd[1523]: time="2025-05-14T04:57:04.121566287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52\" id:\"9281b0047ebf309824eaeb10807f65d43a12b52787aeb2bf05d480147ef0be8f\" pid:5215 exited_at:{seconds:1747198624 nanos:121118527}"
May 14 04:57:04.133173 kubelet[2770]: I0514 04:57:04.132949 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67b74c5b6f-snz4p" podStartSLOduration=25.063503775 podStartE2EDuration="28.132932529s" podCreationTimestamp="2025-05-14 04:56:36 +0000 UTC" firstStartedPulling="2025-05-14 04:57:00.176863794 +0000 UTC m=+48.420475560" lastFinishedPulling="2025-05-14 04:57:03.246292548 +0000 UTC m=+51.489904314" observedRunningTime="2025-05-14 04:57:04.092767761 +0000 UTC m=+52.336379567" watchObservedRunningTime="2025-05-14 04:57:04.132932529 +0000 UTC m=+52.376544295"
May 14 04:57:07.521201 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:35082.service - OpenSSH per-connection server daemon (10.0.0.1:35082).
May 14 04:57:07.587591 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 35082 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ
May 14 04:57:07.589019 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 04:57:07.594071 systemd-logind[1507]: New session 18 of user core.
May 14 04:57:07.602866 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 04:57:07.741665 sshd[5241]: Connection closed by 10.0.0.1 port 35082
May 14 04:57:07.742151 sshd-session[5239]: pam_unix(sshd:session): session closed for user core
May 14 04:57:07.745572 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:35082.service: Deactivated successfully.
May 14 04:57:07.747114 systemd[1]: session-18.scope: Deactivated successfully.
May 14 04:57:07.748515 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit.
May 14 04:57:07.750054 systemd-logind[1507]: Removed session 18.
May 14 04:57:12.754526 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:35836.service - OpenSSH per-connection server daemon (10.0.0.1:35836).
May 14 04:57:12.812751 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 35836 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ
May 14 04:57:12.814051 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 04:57:12.819394 systemd-logind[1507]: New session 19 of user core.
May 14 04:57:12.826867 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 04:57:12.972292 sshd[5267]: Connection closed by 10.0.0.1 port 35836
May 14 04:57:12.972853 sshd-session[5265]: pam_unix(sshd:session): session closed for user core
May 14 04:57:12.976229 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:35836.service: Deactivated successfully.
May 14 04:57:12.977933 systemd[1]: session-19.scope: Deactivated successfully.
May 14 04:57:12.978622 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit.
May 14 04:57:12.979699 systemd-logind[1507]: Removed session 19.
May 14 04:57:13.802764 kubelet[2770]: I0514 04:57:13.802722 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 04:57:15.941182 containerd[1523]: time="2025-05-14T04:57:15.941055232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae938c0b2cf6fcc3808314166b56cd76f17ac5fe5f1b865afcc40d268eb53f52\" id:\"8cce24bcf11763f8d918d71f40e4e3f7d1d83839ae3fb1f488755dbcd01ee7e9\" pid:5293 exited_at:{seconds:1747198635 nanos:940857032}"
May 14 04:57:17.987963 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:35848.service - OpenSSH per-connection server daemon (10.0.0.1:35848).
May 14 04:57:18.048934 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 35848 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ
May 14 04:57:18.050344 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 04:57:18.054453 systemd-logind[1507]: New session 20 of user core.
May 14 04:57:18.064838 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 04:57:18.208040 sshd[5306]: Connection closed by 10.0.0.1 port 35848
May 14 04:57:18.207336 sshd-session[5304]: pam_unix(sshd:session): session closed for user core
May 14 04:57:18.210686 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:35848.service: Deactivated successfully.
May 14 04:57:18.212502 systemd[1]: session-20.scope: Deactivated successfully.
May 14 04:57:18.213428 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit.
May 14 04:57:18.214783 systemd-logind[1507]: Removed session 20.