May 13 12:32:31.838819 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 12:32:31.838839 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 11:28:23 -00 2025
May 13 12:32:31.838848 kernel: KASLR enabled
May 13 12:32:31.838854 kernel: efi: EFI v2.7 by EDK II
May 13 12:32:31.838859 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 13 12:32:31.838865 kernel: random: crng init done
May 13 12:32:31.838871 kernel: secureboot: Secure boot disabled
May 13 12:32:31.838877 kernel: ACPI: Early table checksum verification disabled
May 13 12:32:31.838883 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 13 12:32:31.838890 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 12:32:31.838896 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838901 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838907 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838913 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838920 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838927 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838933 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838939 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838945 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:32:31.838951 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 12:32:31.838957 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 13 12:32:31.838963 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:32:31.838969 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 13 12:32:31.838974 kernel: Zone ranges:
May 13 12:32:31.838981 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:32:31.838988 kernel: DMA32 empty
May 13 12:32:31.838994 kernel: Normal empty
May 13 12:32:31.838999 kernel: Device empty
May 13 12:32:31.839005 kernel: Movable zone start for each node
May 13 12:32:31.839011 kernel: Early memory node ranges
May 13 12:32:31.839017 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 13 12:32:31.839023 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 13 12:32:31.839029 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 13 12:32:31.839035 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 13 12:32:31.839040 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 13 12:32:31.839046 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 13 12:32:31.839052 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 13 12:32:31.839059 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 13 12:32:31.839065 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 13 12:32:31.839071 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 12:32:31.839080 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 12:32:31.839086 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 12:32:31.839093 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 12:32:31.839101 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:32:31.839107 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 12:32:31.839113 kernel: psci: probing for conduit method from ACPI.
May 13 12:32:31.839120 kernel: psci: PSCIv1.1 detected in firmware.
May 13 12:32:31.839126 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 12:32:31.839132 kernel: psci: Trusted OS migration not required
May 13 12:32:31.839138 kernel: psci: SMC Calling Convention v1.1
May 13 12:32:31.839145 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 12:32:31.839151 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 13 12:32:31.839157 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 13 12:32:31.839165 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 12:32:31.839172 kernel: Detected PIPT I-cache on CPU0
May 13 12:32:31.839178 kernel: CPU features: detected: GIC system register CPU interface
May 13 12:32:31.839184 kernel: CPU features: detected: Spectre-v4
May 13 12:32:31.839190 kernel: CPU features: detected: Spectre-BHB
May 13 12:32:31.839197 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 12:32:31.839203 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 12:32:31.839209 kernel: CPU features: detected: ARM erratum 1418040
May 13 12:32:31.839216 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 12:32:31.839222 kernel: alternatives: applying boot alternatives
May 13 12:32:31.839229 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:32:31.839238 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 12:32:31.839244 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 12:32:31.839251 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 12:32:31.839257 kernel: Fallback order for Node 0: 0
May 13 12:32:31.839263 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 13 12:32:31.839269 kernel: Policy zone: DMA
May 13 12:32:31.839276 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 12:32:31.839282 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 13 12:32:31.839288 kernel: software IO TLB: area num 4.
May 13 12:32:31.839295 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 13 12:32:31.839301 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 13 12:32:31.839307 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 12:32:31.839315 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 12:32:31.839322 kernel: rcu: RCU event tracing is enabled.
May 13 12:32:31.839329 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 12:32:31.839335 kernel: Trampoline variant of Tasks RCU enabled.
May 13 12:32:31.839341 kernel: Tracing variant of Tasks RCU enabled.
May 13 12:32:31.839348 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 12:32:31.839354 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 12:32:31.839360 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:32:31.839367 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:32:31.839373 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 12:32:31.839379 kernel: GICv3: 256 SPIs implemented
May 13 12:32:31.839387 kernel: GICv3: 0 Extended SPIs implemented
May 13 12:32:31.839393 kernel: Root IRQ handler: gic_handle_irq
May 13 12:32:31.839399 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 12:32:31.839411 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 13 12:32:31.839417 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 12:32:31.839424 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 12:32:31.839430 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 13 12:32:31.839437 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 13 12:32:31.839443 kernel: GICv3: using LPI property table @0x0000000040100000
May 13 12:32:31.839449 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 13 12:32:31.839456 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 12:32:31.839462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:32:31.839470 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 12:32:31.839477 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 12:32:31.839483 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 12:32:31.839490 kernel: arm-pv: using stolen time PV
May 13 12:32:31.839496 kernel: Console: colour dummy device 80x25
May 13 12:32:31.839503 kernel: ACPI: Core revision 20240827
May 13 12:32:31.839510 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 12:32:31.839516 kernel: pid_max: default: 32768 minimum: 301
May 13 12:32:31.839522 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 12:32:31.839530 kernel: landlock: Up and running.
May 13 12:32:31.839543 kernel: SELinux: Initializing.
May 13 12:32:31.839586 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:32:31.839593 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:32:31.839599 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 12:32:31.839606 kernel: rcu: Hierarchical SRCU implementation.
May 13 12:32:31.839613 kernel: rcu: Max phase no-delay instances is 400.
May 13 12:32:31.839620 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 12:32:31.839626 kernel: Remapping and enabling EFI services.
May 13 12:32:31.839635 kernel: smp: Bringing up secondary CPUs ...
May 13 12:32:31.839646 kernel: Detected PIPT I-cache on CPU1
May 13 12:32:31.839653 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 12:32:31.839661 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 13 12:32:31.839672 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:32:31.839679 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 12:32:31.839685 kernel: Detected PIPT I-cache on CPU2
May 13 12:32:31.839692 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 12:32:31.839699 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 13 12:32:31.839708 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:32:31.839714 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 12:32:31.839721 kernel: Detected PIPT I-cache on CPU3
May 13 12:32:31.839728 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 12:32:31.839735 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 13 12:32:31.839742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:32:31.839748 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 12:32:31.839755 kernel: smp: Brought up 1 node, 4 CPUs
May 13 12:32:31.839762 kernel: SMP: Total of 4 processors activated.
May 13 12:32:31.839770 kernel: CPU: All CPU(s) started at EL1
May 13 12:32:31.839777 kernel: CPU features: detected: 32-bit EL0 Support
May 13 12:32:31.839784 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 12:32:31.839791 kernel: CPU features: detected: Common not Private translations
May 13 12:32:31.839798 kernel: CPU features: detected: CRC32 instructions
May 13 12:32:31.839804 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 12:32:31.839811 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 12:32:31.839818 kernel: CPU features: detected: LSE atomic instructions
May 13 12:32:31.839825 kernel: CPU features: detected: Privileged Access Never
May 13 12:32:31.839833 kernel: CPU features: detected: RAS Extension Support
May 13 12:32:31.839840 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 12:32:31.839846 kernel: alternatives: applying system-wide alternatives
May 13 12:32:31.839853 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 13 12:32:31.839861 kernel: Memory: 2440920K/2572288K available (11072K kernel code, 2276K rwdata, 8932K rodata, 39488K init, 1034K bss, 125600K reserved, 0K cma-reserved)
May 13 12:32:31.839868 kernel: devtmpfs: initialized
May 13 12:32:31.839875 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 12:32:31.839882 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 12:32:31.839888 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 12:32:31.839898 kernel: 0 pages in range for non-PLT usage
May 13 12:32:31.839904 kernel: 508528 pages in range for PLT usage
May 13 12:32:31.839911 kernel: pinctrl core: initialized pinctrl subsystem
May 13 12:32:31.839918 kernel: SMBIOS 3.0.0 present.
May 13 12:32:31.839925 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 12:32:31.839931 kernel: DMI: Memory slots populated: 1/1
May 13 12:32:31.839938 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 12:32:31.839945 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 12:32:31.839952 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 12:32:31.839961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 12:32:31.839967 kernel: audit: initializing netlink subsys (disabled)
May 13 12:32:31.839974 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
May 13 12:32:31.839981 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 12:32:31.839988 kernel: cpuidle: using governor menu
May 13 12:32:31.839995 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 12:32:31.840001 kernel: ASID allocator initialised with 32768 entries
May 13 12:32:31.840008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 12:32:31.840015 kernel: Serial: AMBA PL011 UART driver
May 13 12:32:31.840023 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 12:32:31.840030 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 12:32:31.840037 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 12:32:31.840044 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 12:32:31.840050 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 12:32:31.840057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 12:32:31.840064 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 12:32:31.840071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 12:32:31.840078 kernel: ACPI: Added _OSI(Module Device)
May 13 12:32:31.840086 kernel: ACPI: Added _OSI(Processor Device)
May 13 12:32:31.840093 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 12:32:31.840099 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 12:32:31.840106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 12:32:31.840113 kernel: ACPI: Interpreter enabled
May 13 12:32:31.840119 kernel: ACPI: Using GIC for interrupt routing
May 13 12:32:31.840126 kernel: ACPI: MCFG table detected, 1 entries
May 13 12:32:31.840133 kernel: ACPI: CPU0 has been hot-added
May 13 12:32:31.840140 kernel: ACPI: CPU1 has been hot-added
May 13 12:32:31.840148 kernel: ACPI: CPU2 has been hot-added
May 13 12:32:31.840155 kernel: ACPI: CPU3 has been hot-added
May 13 12:32:31.840162 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 12:32:31.840169 kernel: printk: legacy console [ttyAMA0] enabled
May 13 12:32:31.840175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 12:32:31.840302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 12:32:31.840366 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 12:32:31.840423 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 12:32:31.840482 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 12:32:31.840566 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 12:32:31.840576 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 12:32:31.840583 kernel: PCI host bridge to bus 0000:00
May 13 12:32:31.840653 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 12:32:31.840710 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 12:32:31.840771 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 12:32:31.840826 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 12:32:31.840899 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 13 12:32:31.840969 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:32:31.841028 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 13 12:32:31.841088 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 13 12:32:31.841146 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 12:32:31.841204 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 13 12:32:31.841264 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 13 12:32:31.841322 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 13 12:32:31.841373 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 12:32:31.841424 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 12:32:31.841475 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 12:32:31.841484 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 12:32:31.841491 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 12:32:31.841499 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 12:32:31.841506 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 12:32:31.841513 kernel: iommu: Default domain type: Translated
May 13 12:32:31.841520 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 12:32:31.841527 kernel: efivars: Registered efivars operations
May 13 12:32:31.841534 kernel: vgaarb: loaded
May 13 12:32:31.841558 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 12:32:31.841565 kernel: VFS: Disk quotas dquot_6.6.0
May 13 12:32:31.841572 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 12:32:31.841582 kernel: pnp: PnP ACPI init
May 13 12:32:31.841659 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 12:32:31.841669 kernel: pnp: PnP ACPI: found 1 devices
May 13 12:32:31.841676 kernel: NET: Registered PF_INET protocol family
May 13 12:32:31.841683 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 12:32:31.841690 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 12:32:31.841697 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 12:32:31.841704 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 12:32:31.841713 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 12:32:31.841719 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 12:32:31.841726 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:32:31.841733 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:32:31.841740 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 12:32:31.841747 kernel: PCI: CLS 0 bytes, default 64
May 13 12:32:31.841754 kernel: kvm [1]: HYP mode not available
May 13 12:32:31.841761 kernel: Initialise system trusted keyrings
May 13 12:32:31.841768 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 12:32:31.841776 kernel: Key type asymmetric registered
May 13 12:32:31.841783 kernel: Asymmetric key parser 'x509' registered
May 13 12:32:31.841790 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 12:32:31.841797 kernel: io scheduler mq-deadline registered
May 13 12:32:31.841804 kernel: io scheduler kyber registered
May 13 12:32:31.841811 kernel: io scheduler bfq registered
May 13 12:32:31.841822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 12:32:31.841829 kernel: ACPI: button: Power Button [PWRB]
May 13 12:32:31.841837 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 12:32:31.841903 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 12:32:31.841912 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 12:32:31.841919 kernel: thunder_xcv, ver 1.0
May 13 12:32:31.841926 kernel: thunder_bgx, ver 1.0
May 13 12:32:31.841932 kernel: nicpf, ver 1.0
May 13 12:32:31.841939 kernel: nicvf, ver 1.0
May 13 12:32:31.842007 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 12:32:31.842063 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T12:32:31 UTC (1747139551)
May 13 12:32:31.842073 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 12:32:31.842080 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 13 12:32:31.842088 kernel: watchdog: NMI not fully supported
May 13 12:32:31.842095 kernel: watchdog: Hard watchdog permanently disabled
May 13 12:32:31.842101 kernel: NET: Registered PF_INET6 protocol family
May 13 12:32:31.842108 kernel: Segment Routing with IPv6
May 13 12:32:31.842115 kernel: In-situ OAM (IOAM) with IPv6
May 13 12:32:31.842122 kernel: NET: Registered PF_PACKET protocol family
May 13 12:32:31.842129 kernel: Key type dns_resolver registered
May 13 12:32:31.842137 kernel: registered taskstats version 1
May 13 12:32:31.842144 kernel: Loading compiled-in X.509 certificates
May 13 12:32:31.842151 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: f8df872077a0531ef71a44c67653908e8a70c520'
May 13 12:32:31.842158 kernel: Demotion targets for Node 0: null
May 13 12:32:31.842164 kernel: Key type .fscrypt registered
May 13 12:32:31.842171 kernel: Key type fscrypt-provisioning registered
May 13 12:32:31.842178 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 12:32:31.842185 kernel: ima: Allocated hash algorithm: sha1
May 13 12:32:31.842191 kernel: ima: No architecture policies found
May 13 12:32:31.842204 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 12:32:31.842211 kernel: clk: Disabling unused clocks
May 13 12:32:31.842218 kernel: PM: genpd: Disabling unused power domains
May 13 12:32:31.842225 kernel: Warning: unable to open an initial console.
May 13 12:32:31.842232 kernel: Freeing unused kernel memory: 39488K
May 13 12:32:31.842239 kernel: Run /init as init process
May 13 12:32:31.842245 kernel: with arguments:
May 13 12:32:31.842252 kernel: /init
May 13 12:32:31.842258 kernel: with environment:
May 13 12:32:31.842266 kernel: HOME=/
May 13 12:32:31.842273 kernel: TERM=linux
May 13 12:32:31.842280 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 12:32:31.842288 systemd[1]: Successfully made /usr/ read-only.
May 13 12:32:31.842298 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:32:31.842305 systemd[1]: Detected virtualization kvm.
May 13 12:32:31.842313 systemd[1]: Detected architecture arm64.
May 13 12:32:31.842319 systemd[1]: Running in initrd.
May 13 12:32:31.842328 systemd[1]: No hostname configured, using default hostname.
May 13 12:32:31.842335 systemd[1]: Hostname set to .
May 13 12:32:31.842343 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:32:31.842350 systemd[1]: Queued start job for default target initrd.target.
May 13 12:32:31.842357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:32:31.842365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:32:31.842372 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 12:32:31.842380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:32:31.842389 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 12:32:31.842396 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 12:32:31.842405 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 12:32:31.842412 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 12:32:31.842422 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:32:31.842430 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:32:31.842439 systemd[1]: Reached target paths.target - Path Units.
May 13 12:32:31.842446 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:32:31.842453 systemd[1]: Reached target swap.target - Swaps.
May 13 12:32:31.842460 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:32:31.842468 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:32:31.842475 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:32:31.842482 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 12:32:31.842492 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 12:32:31.842500 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:32:31.842509 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:32:31.842516 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:32:31.842523 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:32:31.842531 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 12:32:31.842543 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:32:31.842561 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 12:32:31.842569 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 12:32:31.842576 systemd[1]: Starting systemd-fsck-usr.service...
May 13 12:32:31.842585 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:32:31.842593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:32:31.842600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:32:31.842610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 12:32:31.842618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:32:31.842627 systemd[1]: Finished systemd-fsck-usr.service.
May 13 12:32:31.842634 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 12:32:31.842657 systemd-journald[245]: Collecting audit messages is disabled.
May 13 12:32:31.842675 systemd-journald[245]: Journal started
May 13 12:32:31.842694 systemd-journald[245]: Runtime Journal (/run/log/journal/db43862be9a949528ad34a1834f1121f) is 6M, max 48.5M, 42.4M free.
May 13 12:32:31.834029 systemd-modules-load[247]: Inserted module 'overlay'
May 13 12:32:31.848154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:32:31.848173 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 12:32:31.850104 systemd-modules-load[247]: Inserted module 'br_netfilter'
May 13 12:32:31.851694 kernel: Bridge firewalling registered
May 13 12:32:31.851711 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:32:31.852918 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:32:31.854165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:32:31.858491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 12:32:31.860712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:32:31.862757 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:32:31.869115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:32:31.876261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:32:31.879766 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:32:31.879774 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 12:32:31.883064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:32:31.886594 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:32:31.888599 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 12:32:31.891138 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:32:31.913679 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:32:31.928951 systemd-resolved[289]: Positive Trust Anchors:
May 13 12:32:31.928966 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:32:31.928996 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:32:31.933736 systemd-resolved[289]: Defaulting to hostname 'linux'.
May 13 12:32:31.934635 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:32:31.938338 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:32:31.989559 kernel: SCSI subsystem initialized
May 13 12:32:31.993572 kernel: Loading iSCSI transport class v2.0-870.
May 13 12:32:32.002587 kernel: iscsi: registered transport (tcp)
May 13 12:32:32.016669 kernel: iscsi: registered transport (qla4xxx)
May 13 12:32:32.016691 kernel: QLogic iSCSI HBA Driver
May 13 12:32:32.032267 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:32:32.051522 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:32:32.055302 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:32:32.097703 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 12:32:32.099531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 12:32:32.164600 kernel: raid6: neonx8 gen() 15691 MB/s
May 13 12:32:32.181576 kernel: raid6: neonx4 gen() 15706 MB/s
May 13 12:32:32.198569 kernel: raid6: neonx2 gen() 13158 MB/s
May 13 12:32:32.215578 kernel: raid6: neonx1 gen() 10543 MB/s
May 13 12:32:32.232575 kernel: raid6: int64x8 gen() 6889 MB/s
May 13 12:32:32.249577 kernel: raid6: int64x4 gen() 7352 MB/s
May 13 12:32:32.266577 kernel: raid6: int64x2 gen() 6099 MB/s
May 13 12:32:32.283658 kernel: raid6: int64x1 gen() 5055 MB/s
May 13 12:32:32.283696 kernel: raid6: using algorithm neonx4 gen() 15706 MB/s
May 13 12:32:32.301636 kernel: raid6: .... xor() 12394 MB/s, rmw enabled
May 13 12:32:32.301651 kernel: raid6: using neon recovery algorithm
May 13 12:32:32.308805 kernel: xor: measuring software checksum speed
May 13 12:32:32.308822 kernel: 8regs : 21601 MB/sec
May 13 12:32:32.309574 kernel: 32regs : 21676 MB/sec
May 13 12:32:32.310681 kernel: arm64_neon : 21670 MB/sec
May 13 12:32:32.310703 kernel: xor: using function: 32regs (21676 MB/sec)
May 13 12:32:32.361575 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 12:32:32.368607 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:32:32.371092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:32:32.396751 systemd-udevd[499]: Using default interface naming scheme 'v255'.
May 13 12:32:32.400805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:32:32.403082 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 12:32:32.425485 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
May 13 12:32:32.446059 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:32:32.448247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:32:32.497247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:32:32.499383 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 12:32:32.548114 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 12:32:32.553650 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 12:32:32.550441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:32:32.550509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:32:32.555925 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:32:32.558914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:32:32.566645 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 12:32:32.566688 kernel: GPT:9289727 != 19775487
May 13 12:32:32.566698 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 12:32:32.566707 kernel: GPT:9289727 != 19775487
May 13 12:32:32.566732 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 12:32:32.566744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:32:32.588669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 12:32:32.590094 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 12:32:32.592261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:32:32.605984 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 12:32:32.617848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:32:32.624031 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 12:32:32.625243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 12:32:32.628193 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:32:32.630400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:32:32.632493 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:32:32.635184 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 12:32:32.636992 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 12:32:32.656098 disk-uuid[591]: Primary Header is updated.
May 13 12:32:32.656098 disk-uuid[591]: Secondary Entries is updated.
May 13 12:32:32.656098 disk-uuid[591]: Secondary Header is updated.
May 13 12:32:32.660567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:32:32.663117 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:32:33.668568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:32:33.668968 disk-uuid[595]: The operation has completed successfully.
May 13 12:32:33.693025 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 12:32:33.693125 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 12:32:33.720973 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 12:32:33.742075 sh[610]: Success
May 13 12:32:33.758974 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 12:32:33.759023 kernel: device-mapper: uevent: version 1.0.3
May 13 12:32:33.760173 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 12:32:33.772566 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 13 12:32:33.797314 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 12:32:33.800163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 12:32:33.811426 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 12:32:33.817688 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 12:32:33.820568 kernel: BTRFS: device fsid 5ded7f9d-c045-4eec-a161-ff9af5b01d28 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (622)
May 13 12:32:33.820595 kernel: BTRFS info (device dm-0): first mount of filesystem 5ded7f9d-c045-4eec-a161-ff9af5b01d28
May 13 12:32:33.820606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 12:32:33.822059 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 12:32:33.827064 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 12:32:33.828328 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:32:33.829988 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 12:32:33.831035 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 12:32:33.834009 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 12:32:33.862001 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (652)
May 13 12:32:33.862037 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:32:33.862048 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:32:33.863106 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:32:33.873560 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:32:33.874094 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 12:32:33.876036 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 12:32:33.940272 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:32:33.943397 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:32:33.990893 systemd-networkd[796]: lo: Link UP
May 13 12:32:33.990904 systemd-networkd[796]: lo: Gained carrier
May 13 12:32:33.991592 systemd-networkd[796]: Enumeration completed
May 13 12:32:33.991704 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:32:33.991981 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:32:33.991984 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:32:33.992482 systemd-networkd[796]: eth0: Link UP
May 13 12:32:33.992485 systemd-networkd[796]: eth0: Gained carrier
May 13 12:32:33.992492 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:32:33.993765 systemd[1]: Reached target network.target - Network.
May 13 12:32:34.015602 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:32:34.022919 ignition[703]: Ignition 2.21.0
May 13 12:32:34.022938 ignition[703]: Stage: fetch-offline
May 13 12:32:34.022965 ignition[703]: no configs at "/usr/lib/ignition/base.d"
May 13 12:32:34.022972 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:32:34.023190 ignition[703]: parsed url from cmdline: ""
May 13 12:32:34.023193 ignition[703]: no config URL provided
May 13 12:32:34.023197 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
May 13 12:32:34.023204 ignition[703]: no config at "/usr/lib/ignition/user.ign"
May 13 12:32:34.023222 ignition[703]: op(1): [started] loading QEMU firmware config module
May 13 12:32:34.023227 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 12:32:34.030202 ignition[703]: op(1): [finished] loading QEMU firmware config module
May 13 12:32:34.068353 ignition[703]: parsing config with SHA512: a05e59b94ed024439964513c74462dcf589ab42078b6d69e257b75977c12ba8411d14998063e50960a6bd81094bae8d28c9267cecda5cb39cf13d423fee1d975
May 13 12:32:34.072862 unknown[703]: fetched base config from "system"
May 13 12:32:34.072873 unknown[703]: fetched user config from "qemu"
May 13 12:32:34.073270 ignition[703]: fetch-offline: fetch-offline passed
May 13 12:32:34.073328 ignition[703]: Ignition finished successfully
May 13 12:32:34.075811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:32:34.077772 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 12:32:34.078575 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 12:32:34.108495 ignition[811]: Ignition 2.21.0
May 13 12:32:34.108512 ignition[811]: Stage: kargs
May 13 12:32:34.108687 ignition[811]: no configs at "/usr/lib/ignition/base.d"
May 13 12:32:34.108697 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:32:34.110490 ignition[811]: kargs: kargs passed
May 13 12:32:34.110590 ignition[811]: Ignition finished successfully
May 13 12:32:34.114881 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 12:32:34.117688 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 12:32:34.147231 ignition[819]: Ignition 2.21.0
May 13 12:32:34.147248 ignition[819]: Stage: disks
May 13 12:32:34.147396 ignition[819]: no configs at "/usr/lib/ignition/base.d"
May 13 12:32:34.147405 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:32:34.148748 ignition[819]: disks: disks passed
May 13 12:32:34.151104 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 12:32:34.148821 ignition[819]: Ignition finished successfully
May 13 12:32:34.152503 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 12:32:34.153919 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 12:32:34.155880 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:32:34.157431 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:32:34.159324 systemd[1]: Reached target basic.target - Basic System.
May 13 12:32:34.162036 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 12:32:34.197046 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 12:32:34.201006 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 12:32:34.205222 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 12:32:34.281574 kernel: EXT4-fs (vda9): mounted filesystem 02660b30-6941-48da-9f0e-501a024e2c48 r/w with ordered data mode. Quota mode: none.
May 13 12:32:34.281896 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 12:32:34.283162 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 12:32:34.285586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:32:34.287182 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 12:32:34.288199 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 12:32:34.288238 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 12:32:34.288260 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:32:34.298320 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 12:32:34.300527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 12:32:34.305513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (838)
May 13 12:32:34.305553 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:32:34.306681 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:32:34.307554 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:32:34.309570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:32:34.348368 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
May 13 12:32:34.352608 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
May 13 12:32:34.356108 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
May 13 12:32:34.360171 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 12:32:34.423645 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 12:32:34.425646 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 12:32:34.427143 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 12:32:34.441568 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:32:34.455320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 12:32:34.459233 ignition[951]: INFO : Ignition 2.21.0
May 13 12:32:34.459233 ignition[951]: INFO : Stage: mount
May 13 12:32:34.462180 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:32:34.462180 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:32:34.462180 ignition[951]: INFO : mount: mount passed
May 13 12:32:34.462180 ignition[951]: INFO : Ignition finished successfully
May 13 12:32:34.462813 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 12:32:34.464917 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 12:32:34.827918 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 12:32:34.829524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:32:34.852621 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (965)
May 13 12:32:34.852651 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:32:34.853710 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:32:34.853724 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:32:34.856841 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:32:34.886627 ignition[982]: INFO : Ignition 2.21.0 May 13 12:32:34.886627 ignition[982]: INFO : Stage: files May 13 12:32:34.888251 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:32:34.888251 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 12:32:34.888251 ignition[982]: DEBUG : files: compiled without relabeling support, skipping May 13 12:32:34.891765 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 12:32:34.891765 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 12:32:34.891765 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 12:32:34.891765 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 12:32:34.891765 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 12:32:34.890913 unknown[982]: wrote ssh authorized keys file for user: core May 13 12:32:34.899651 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 12:32:34.899651 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 13 12:32:34.932014 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 12:32:35.182419 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 12:32:35.182419 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 12:32:35.186156 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 13 12:32:35.548574 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 12:32:35.696156 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 12:32:35.696156 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 12:32:35.700288 ignition[982]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 12:32:35.700288 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 13 12:32:35.817737 systemd-networkd[796]: eth0: Gained IPv6LL May 13 12:32:35.864619 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 12:32:36.032130 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 12:32:36.032130 ignition[982]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 13 12:32:36.035812 ignition[982]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 12:32:36.038786 ignition[982]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 12:32:36.038786 ignition[982]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 13 12:32:36.038786 ignition[982]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 13 12:32:36.043127 ignition[982]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 12:32:36.043127 ignition[982]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 12:32:36.043127 ignition[982]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 13 12:32:36.043127 ignition[982]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 13 12:32:36.055369 ignition[982]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 12:32:36.058974 ignition[982]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 12:32:36.060605 ignition[982]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 13 12:32:36.060605 ignition[982]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 13 12:32:36.060605 ignition[982]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 13 12:32:36.060605 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 12:32:36.060605 
ignition[982]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 12:32:36.060605 ignition[982]: INFO : files: files passed May 13 12:32:36.060605 ignition[982]: INFO : Ignition finished successfully May 13 12:32:36.063580 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 12:32:36.066717 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 12:32:36.070670 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 12:32:36.084285 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 12:32:36.085360 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 12:32:36.088251 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory May 13 12:32:36.089669 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 12:32:36.089669 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 12:32:36.093186 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 12:32:36.091338 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 12:32:36.094915 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 12:32:36.097696 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 12:32:36.129424 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 12:32:36.129571 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 12:32:36.131872 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 12:32:36.133739 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 12:32:36.135615 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 12:32:36.136333 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 12:32:36.149883 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 12:32:36.152203 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 12:32:36.175180 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 12:32:36.176527 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:32:36.178723 systemd[1]: Stopped target timers.target - Timer Units. May 13 12:32:36.180563 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 12:32:36.180695 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 12:32:36.183365 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 12:32:36.184541 systemd[1]: Stopped target basic.target - Basic System. May 13 12:32:36.186521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 12:32:36.188554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 12:32:36.190395 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 12:32:36.192392 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
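The files stage above is driven by an Ignition config that the journal never prints. As a rough, hypothetical reconstruction only (standard Ignition v3 schema; the paths, URLs and unit names are copied from the log, everything else is assumed), a fragment requesting the same kind of operations, op(3), op(a)/op(b) and the unit handling, could be generated like this:

    import json

    # Hypothetical sketch of part of the Ignition config behind the "files" stage.
    # Field names follow the public Ignition v3 spec; the real config is not shown in the log.
    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            # op(1)/op(2): create "core" and install its ssh keys (key material elided here)
            "users": [{"name": "core", "sshAuthorizedKeys": ["<key elided>"]}]
        },
        "storage": {
            "files": [
                {   # op(3): fetch the Helm tarball
                    "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
                },
                {   # op(b): fetch the kubernetes sysext image
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
                    "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw"},
                },
            ],
            "links": [
                {   # op(a): activate the sysext by linking it into /etc/extensions
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True, "contents": "<unit text elided>"},
                {"name": "coreos-metadata.service", "enabled": False},  # op(10): preset disabled
            ]
        },
    }
    print(json.dumps(config, indent=2))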
May 13 12:32:36.194487 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 12:32:36.196520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 12:32:36.198647 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 12:32:36.200530 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 12:32:36.202651 systemd[1]: Stopped target swap.target - Swaps. May 13 12:32:36.204405 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 12:32:36.204542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 12:32:36.207017 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 12:32:36.208984 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:32:36.210927 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 12:32:36.211626 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:32:36.213022 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 12:32:36.213138 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 12:32:36.215943 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 12:32:36.216065 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 12:32:36.218430 systemd[1]: Stopped target paths.target - Path Units. May 13 12:32:36.220090 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 12:32:36.223600 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:32:36.225213 systemd[1]: Stopped target slices.target - Slice Units. May 13 12:32:36.226940 systemd[1]: Stopped target sockets.target - Socket Units. May 13 12:32:36.229041 systemd[1]: iscsid.socket: Deactivated successfully. May 13 12:32:36.229129 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 12:32:36.230837 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 12:32:36.230935 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 12:32:36.232725 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 12:32:36.232854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 12:32:36.234697 systemd[1]: ignition-files.service: Deactivated successfully. May 13 12:32:36.234805 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 12:32:36.237329 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 12:32:36.239674 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 12:32:36.240847 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 12:32:36.240967 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:32:36.243156 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 12:32:36.243257 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 12:32:36.248378 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 12:32:36.249719 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 12:32:36.258238 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 13 12:32:36.264475 ignition[1037]: INFO : Ignition 2.21.0 May 13 12:32:36.264475 ignition[1037]: INFO : Stage: umount May 13 12:32:36.266602 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:32:36.266602 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 12:32:36.268862 ignition[1037]: INFO : umount: umount passed May 13 12:32:36.268862 ignition[1037]: INFO : Ignition finished successfully May 13 12:32:36.269369 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 12:32:36.269468 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 12:32:36.270839 systemd[1]: Stopped target network.target - Network. May 13 12:32:36.272202 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 12:32:36.272265 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 12:32:36.273994 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 12:32:36.274041 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 12:32:36.275792 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 12:32:36.275840 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 12:32:36.277443 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 12:32:36.277482 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 12:32:36.279377 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 12:32:36.281204 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 12:32:36.288111 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 12:32:36.288215 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 12:32:36.291707 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 12:32:36.291890 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 12:32:36.291975 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 12:32:36.294390 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 12:32:36.294944 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 13 12:32:36.296093 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 12:32:36.296136 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 12:32:36.298998 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 12:32:36.299971 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 12:32:36.300028 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 12:32:36.302134 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 12:32:36.302186 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 12:32:36.305248 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 12:32:36.305290 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 12:32:36.307796 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 12:32:36.307843 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:32:36.310984 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 13 12:32:36.314213 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 12:32:36.314275 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 12:32:36.328425 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 12:32:36.328841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:32:36.331005 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 12:32:36.331097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 12:32:36.332292 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 12:32:36.332375 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 12:32:36.334848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 12:32:36.334914 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 12:32:36.336165 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 12:32:36.336198 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:32:36.337916 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 12:32:36.337971 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 12:32:36.340474 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 12:32:36.340539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 12:32:36.343322 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 12:32:36.343376 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 12:32:36.346363 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 12:32:36.346415 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 12:32:36.349017 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 12:32:36.350067 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 13 12:32:36.350126 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:32:36.352856 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 12:32:36.352898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:32:36.356008 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 12:32:36.356048 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 12:32:36.359079 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 12:32:36.359122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:32:36.362002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 12:32:36.362054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:32:36.365977 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 13 12:32:36.366024 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 13 12:32:36.366052 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 12:32:36.366082 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 13 12:32:36.366389 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 12:32:36.366467 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 12:32:36.369034 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 12:32:36.371293 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 12:32:36.388229 systemd[1]: Switching root. May 13 12:32:36.414812 systemd-journald[245]: Journal stopped May 13 12:32:37.183447 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). May 13 12:32:37.183505 kernel: SELinux: policy capability network_peer_controls=1 May 13 12:32:37.183526 kernel: SELinux: policy capability open_perms=1 May 13 12:32:37.183536 kernel: SELinux: policy capability extended_socket_class=1 May 13 12:32:37.183589 kernel: SELinux: policy capability always_check_network=0 May 13 12:32:37.183600 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 12:32:37.183612 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 12:32:37.183621 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 12:32:37.183630 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 12:32:37.183639 kernel: SELinux: policy capability userspace_initial_context=0 May 13 12:32:37.183648 kernel: audit: type=1403 audit(1747139556.588:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 12:32:37.183666 systemd[1]: Successfully loaded SELinux policy in 43.160ms. May 13 12:32:37.183685 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.246ms. May 13 12:32:37.183698 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 12:32:37.183709 systemd[1]: Detected virtualization kvm. May 13 12:32:37.183720 systemd[1]: Detected architecture arm64. May 13 12:32:37.183731 systemd[1]: Detected first boot. May 13 12:32:37.183741 systemd[1]: Initializing machine ID from VM UUID. May 13 12:32:37.183751 zram_generator::config[1083]: No configuration found. May 13 12:32:37.183762 kernel: NET: Registered PF_VSOCK protocol family May 13 12:32:37.183771 systemd[1]: Populated /etc with preset unit settings. May 13 12:32:37.183782 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 12:32:37.183792 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 12:32:37.183804 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 12:32:37.183818 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 12:32:37.183829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 12:32:37.183839 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 12:32:37.183849 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 12:32:37.183859 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 12:32:37.183869 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 12:32:37.183879 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
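Every entry in this capture has the same "timestamp emitter[pid]: message" shape, with kernel lines omitting the PID. If you need to slice a transcript like this apart, a small parser along the following lines is usually enough; the regex is only a sketch tuned to the format visible here and it deliberately skips oddly named emitters such as "(sd-merge)".

    import re

    # Matches lines such as:
    #   May 13 12:32:36.414812 systemd-journald[245]: Journal stopped
    #   May 13 12:32:37.183505 kernel: SELinux: policy capability open_perms=1
    LINE = re.compile(
        r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<unit>[\w@.\-]+)(?:\[(?P<pid>\d+)\])?: "
        r"(?P<msg>.*)$"
    )

    def parse(line: str):
        """Return ts/unit/pid/msg for a matching line, or None for shapes this sketch ignores."""
        m = LINE.match(line)
        return m.groupdict() if m else None

    print(parse("May 13 12:32:36.414812 systemd-journald[245]: Journal stopped"))
    # {'ts': 'May 13 12:32:36.414812', 'unit': 'systemd-journald', 'pid': '245', 'msg': 'Journal stopped'}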
May 13 12:32:37.183889 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 12:32:37.183901 systemd[1]: Created slice user.slice - User and Session Slice. May 13 12:32:37.183911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:32:37.183921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:32:37.183932 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 12:32:37.183942 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 12:32:37.183952 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 12:32:37.183962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 12:32:37.183972 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 12:32:37.183984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:32:37.183994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 12:32:37.184004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 12:32:37.184015 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 12:32:37.184025 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 12:32:37.184035 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 12:32:37.184045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:32:37.184055 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 12:32:37.184067 systemd[1]: Reached target slices.target - Slice Units. May 13 12:32:37.184078 systemd[1]: Reached target swap.target - Swaps. May 13 12:32:37.184088 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 12:32:37.184099 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 12:32:37.184108 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 12:32:37.184118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 12:32:37.184128 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 12:32:37.184138 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:32:37.184149 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 12:32:37.184159 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 12:32:37.184170 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 12:32:37.184180 systemd[1]: Mounting media.mount - External Media Directory... May 13 12:32:37.184190 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 12:32:37.184201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 12:32:37.184211 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 12:32:37.184221 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 12:32:37.184232 systemd[1]: Reached target machines.target - Containers. 
May 13 12:32:37.184242 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 12:32:37.184254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 12:32:37.184264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 12:32:37.184275 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 12:32:37.184285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:32:37.184295 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 12:32:37.184305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 12:32:37.184315 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 12:32:37.184326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 12:32:37.184336 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 12:32:37.184348 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 12:32:37.184358 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 12:32:37.184369 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 12:32:37.184379 systemd[1]: Stopped systemd-fsck-usr.service. May 13 12:32:37.184390 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:32:37.184400 kernel: loop: module loaded May 13 12:32:37.184410 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 12:32:37.184420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 12:32:37.184432 kernel: ACPI: bus type drm_connector registered May 13 12:32:37.184441 kernel: fuse: init (API version 7.41) May 13 12:32:37.184451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 12:32:37.184462 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 12:32:37.184472 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 12:32:37.184482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 12:32:37.184494 systemd[1]: verity-setup.service: Deactivated successfully. May 13 12:32:37.184512 systemd[1]: Stopped verity-setup.service. May 13 12:32:37.184523 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 12:32:37.184533 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 12:32:37.184571 systemd-journald[1155]: Collecting audit messages is disabled. May 13 12:32:37.184595 systemd[1]: Mounted media.mount - External Media Directory. May 13 12:32:37.184607 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 12:32:37.184618 systemd-journald[1155]: Journal started May 13 12:32:37.184639 systemd-journald[1155]: Runtime Journal (/run/log/journal/db43862be9a949528ad34a1834f1121f) is 6M, max 48.5M, 42.4M free. May 13 12:32:36.953731 systemd[1]: Queued start job for default target multi-user.target. 
May 13 12:32:36.976430 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 12:32:36.976839 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 12:32:37.187579 systemd[1]: Started systemd-journald.service - Journal Service. May 13 12:32:37.188025 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 12:32:37.189364 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 12:32:37.190693 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 12:32:37.192144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:32:37.193632 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 12:32:37.193786 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 12:32:37.195190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:32:37.195369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:32:37.196763 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 12:32:37.196912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 12:32:37.198361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:32:37.198531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:32:37.199961 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 12:32:37.200125 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 12:32:37.201422 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 12:32:37.201626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 12:32:37.202922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 12:32:37.204315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:32:37.205838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 12:32:37.207451 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 12:32:37.219352 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 12:32:37.221796 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 12:32:37.223870 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 12:32:37.225063 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 12:32:37.225091 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 12:32:37.226954 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 12:32:37.231295 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 12:32:37.232683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:32:37.234117 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 12:32:37.236114 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 12:32:37.237367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 13 12:32:37.240443 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 12:32:37.241681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 12:32:37.243672 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:32:37.246045 systemd-journald[1155]: Time spent on flushing to /var/log/journal/db43862be9a949528ad34a1834f1121f is 23.012ms for 891 entries. May 13 12:32:37.246045 systemd-journald[1155]: System Journal (/var/log/journal/db43862be9a949528ad34a1834f1121f) is 8M, max 195.6M, 187.6M free. May 13 12:32:37.281382 systemd-journald[1155]: Received client request to flush runtime journal. May 13 12:32:37.281427 kernel: loop0: detected capacity change from 0 to 138376 May 13 12:32:37.281452 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 12:32:37.247724 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 12:32:37.250747 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 12:32:37.254512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:32:37.256062 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 12:32:37.258857 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 12:32:37.266392 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 12:32:37.270349 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 12:32:37.275734 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 12:32:37.283952 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 12:32:37.284774 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 13 12:32:37.284784 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 13 12:32:37.288932 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 12:32:37.294211 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 12:32:37.297593 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:32:37.301575 kernel: loop1: detected capacity change from 0 to 107312 May 13 12:32:37.311533 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 12:32:37.330263 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 12:32:37.331387 kernel: loop2: detected capacity change from 0 to 194096 May 13 12:32:37.334798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 12:32:37.357943 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. May 13 12:32:37.357962 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. May 13 12:32:37.360559 kernel: loop3: detected capacity change from 0 to 138376 May 13 12:32:37.365003 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:32:37.370569 kernel: loop4: detected capacity change from 0 to 107312 May 13 12:32:37.375565 kernel: loop5: detected capacity change from 0 to 194096 May 13 12:32:37.379244 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
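The loopN capacity messages above are the extension images being attached ahead of the merge; the kernel reports block-device capacity in 512-byte sectors, and each of the three sizes appears twice (loop0/loop3, loop1/loop4, loop2/loop5), which looks like the same three images seen twice. Under that sector-size assumption the sizes work out as in this small sketch:

    # Capacities copied from the "detected capacity change" lines above,
    # assumed to be counts of 512-byte sectors.
    SECTOR = 512
    for name, sectors in [("loop0/loop3", 138376), ("loop1/loop4", 107312), ("loop2/loop5", 194096)]:
        print(f"{name}: {sectors * SECTOR / 2**20:.1f} MiB")
    # loop0/loop3: 67.6 MiB
    # loop1/loop4: 52.4 MiB
    # loop2/loop5: 94.8 MiB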
May 13 12:32:37.379639 (sd-merge)[1223]: Merged extensions into '/usr'. May 13 12:32:37.384684 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... May 13 12:32:37.384702 systemd[1]: Reloading... May 13 12:32:37.463582 zram_generator::config[1262]: No configuration found. May 13 12:32:37.515371 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 12:32:37.521584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:32:37.584385 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 12:32:37.584682 systemd[1]: Reloading finished in 199 ms. May 13 12:32:37.616219 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 12:32:37.617715 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 12:32:37.628672 systemd[1]: Starting ensure-sysext.service... May 13 12:32:37.630359 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 12:32:37.638673 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... May 13 12:32:37.638687 systemd[1]: Reloading... May 13 12:32:37.645733 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 13 12:32:37.646077 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 13 12:32:37.646353 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 12:32:37.646646 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 12:32:37.647337 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 12:32:37.647664 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. May 13 12:32:37.647768 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. May 13 12:32:37.650662 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. May 13 12:32:37.650761 systemd-tmpfiles[1285]: Skipping /boot May 13 12:32:37.659958 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. May 13 12:32:37.660059 systemd-tmpfiles[1285]: Skipping /boot May 13 12:32:37.697614 zram_generator::config[1312]: No configuration found. May 13 12:32:37.760348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:32:37.822327 systemd[1]: Reloading finished in 183 ms. May 13 12:32:37.833960 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 12:32:37.840757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:32:37.852543 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:32:37.855225 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 12:32:37.857454 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
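sd-merge's "Merged extensions into '/usr'" line is systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr/ and /opt/, which is presumably why the reload right after it starts seeing units such as docker.socket. As an illustration only (the search paths come from the systemd-sysext documentation, not from this log), listing the images such a machine would consider might look like:

    from pathlib import Path

    # Standard systemd-sysext search locations; /etc/extensions/kubernetes.raw is
    # the symlink Ignition wrote during the files stage above.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for image in sorted(p.iterdir()):
            # Raw images and plain directories are both accepted as extensions.
            print(f"{d}: {image.name}")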
May 13 12:32:37.860331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 12:32:37.862730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 12:32:37.868282 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 12:32:37.880771 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 12:32:37.883522 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 12:32:37.891059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 12:32:37.892776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:32:37.895276 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 12:32:37.899922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 12:32:37.900965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:32:37.901136 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:32:37.906933 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 12:32:37.907989 systemd-udevd[1353]: Using default interface naming scheme 'v255'. May 13 12:32:37.910736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:32:37.910897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:32:37.912910 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 12:32:37.913108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 12:32:37.915558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:32:37.915735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:32:37.929909 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 12:32:37.933749 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 12:32:37.940460 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:32:37.947739 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 12:32:37.949573 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 12:32:37.952707 augenrules[1402]: No rules May 13 12:32:37.953340 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:32:37.953918 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 12:32:37.960113 systemd[1]: Finished ensure-sysext.service. May 13 12:32:37.966000 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:32:37.967117 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 12:32:37.968335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:32:37.971762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 12:32:37.986382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 13 12:32:37.990578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 12:32:37.991767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:32:37.991811 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:32:37.993644 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 12:32:37.996444 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 12:32:37.997607 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 12:32:38.003890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:32:38.005086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:32:38.006960 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 12:32:38.007115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 12:32:38.008982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:32:38.009140 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:32:38.011221 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 12:32:38.011363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 12:32:38.017394 augenrules[1422]: /sbin/augenrules: No change May 13 12:32:38.023453 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 12:32:38.023777 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 12:32:38.023844 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 12:32:38.031565 augenrules[1451]: No rules May 13 12:32:38.033135 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:32:38.034845 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 12:32:38.083134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 12:32:38.087332 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 12:32:38.117213 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 12:32:38.138144 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 12:32:38.139536 systemd[1]: Reached target time-set.target - System Time Set. May 13 12:32:38.154806 systemd-resolved[1351]: Positive Trust Anchors: May 13 12:32:38.154824 systemd-resolved[1351]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 12:32:38.154855 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 12:32:38.161230 systemd-networkd[1430]: lo: Link UP May 13 12:32:38.161244 systemd-networkd[1430]: lo: Gained carrier May 13 12:32:38.162230 systemd-networkd[1430]: Enumeration completed May 13 12:32:38.162342 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 12:32:38.165940 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 12:32:38.171142 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 12:32:38.175870 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 12:32:38.175877 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 12:32:38.176691 systemd-resolved[1351]: Defaulting to hostname 'linux'. May 13 12:32:38.179976 systemd-networkd[1430]: eth0: Link UP May 13 12:32:38.180102 systemd-networkd[1430]: eth0: Gained carrier May 13 12:32:38.180119 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 12:32:38.186765 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 12:32:38.187974 systemd[1]: Reached target network.target - Network. May 13 12:32:38.189319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 12:32:38.190948 systemd[1]: Reached target sysinit.target - System Initialization. May 13 12:32:38.192507 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 12:32:38.194259 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 12:32:38.196764 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 12:32:38.197977 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 12:32:38.200207 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 12:32:38.201470 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 12:32:38.201530 systemd[1]: Reached target paths.target - Path Units. May 13 12:32:38.202480 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 12:32:38.202617 systemd[1]: Reached target timers.target - Timer Units. May 13 12:32:38.204235 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. May 13 12:32:38.205608 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
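Two details in the block above are easy to miss. The ". IN DS 20326 8 2 e06d44b8…" record is the DNS root zone trust anchor that systemd-resolved uses for DNSSEC (key tag 20326, algorithm 8, RSA/SHA-256, digest type 2, SHA-256). And eth0's DHCPv4 lease, 10.0.0.26/16 with gateway 10.0.0.1, implies the usual derived network values, which are quick to double-check with the standard library:

    import ipaddress

    # Address and prefix exactly as reported in the DHCPv4 line above.
    iface = ipaddress.ip_interface("10.0.0.26/16")
    print(iface.network)                    # 10.0.0.0/16
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(iface.network.num_addresses)      # 65536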
May 13 12:32:38.209193 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 12:32:38.214092 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 12:32:38.215659 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 12:32:38.217042 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 12:32:38.217387 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 12:32:38.217440 systemd-timesyncd[1434]: Initial clock synchronization to Tue 2025-05-13 12:32:38.588786 UTC. May 13 12:32:38.220052 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 12:32:38.221420 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 12:32:38.223712 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 12:32:38.225147 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 12:32:38.233371 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:32:38.234528 systemd[1]: Reached target basic.target - Basic System. May 13 12:32:38.235485 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 12:32:38.235524 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 12:32:38.236523 systemd[1]: Starting containerd.service - containerd container runtime... May 13 12:32:38.238438 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 12:32:38.240278 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 12:32:38.252438 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 12:32:38.254403 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 12:32:38.255432 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 12:32:38.256347 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 12:32:38.260647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 12:32:38.263458 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 12:32:38.266651 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 12:32:38.270129 jq[1492]: false May 13 12:32:38.275633 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 13 12:32:38.277958 extend-filesystems[1493]: Found loop3 May 13 12:32:38.277958 extend-filesystems[1493]: Found loop4 May 13 12:32:38.281486 extend-filesystems[1493]: Found loop5 May 13 12:32:38.281486 extend-filesystems[1493]: Found vda May 13 12:32:38.281486 extend-filesystems[1493]: Found vda1 May 13 12:32:38.281486 extend-filesystems[1493]: Found vda2 May 13 12:32:38.281486 extend-filesystems[1493]: Found vda3 May 13 12:32:38.281486 extend-filesystems[1493]: Found usr May 13 12:32:38.281486 extend-filesystems[1493]: Found vda4 May 13 12:32:38.281486 extend-filesystems[1493]: Found vda6 May 13 12:32:38.281486 extend-filesystems[1493]: Found vda7 May 13 12:32:38.281486 extend-filesystems[1493]: Found vda9 May 13 12:32:38.281486 extend-filesystems[1493]: Checking size of /dev/vda9 May 13 12:32:38.278525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:32:38.281220 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 12:32:38.282068 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 12:32:38.283075 systemd[1]: Starting update-engine.service - Update Engine... May 13 12:32:38.289140 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 12:32:38.292067 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 12:32:38.295109 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 12:32:38.295332 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 12:32:38.295735 systemd[1]: motdgen.service: Deactivated successfully. May 13 12:32:38.295977 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 12:32:38.297602 extend-filesystems[1493]: Resized partition /dev/vda9 May 13 12:32:38.301278 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 12:32:38.301793 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 12:32:38.303593 extend-filesystems[1517]: resize2fs 1.47.2 (1-Jan-2025) May 13 12:32:38.309579 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 12:32:38.338456 jq[1513]: true May 13 12:32:38.362624 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 12:32:38.362828 (ntainerd)[1528]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 12:32:38.376212 tar[1518]: linux-arm64/helm May 13 12:32:38.376426 update_engine[1511]: I20250513 12:32:38.373646 1511 main.cc:92] Flatcar Update Engine starting May 13 12:32:38.376593 jq[1527]: true May 13 12:32:38.377351 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 12:32:38.377351 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 12:32:38.377351 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
May 13 12:32:38.385990 extend-filesystems[1493]: Resized filesystem in /dev/vda9 May 13 12:32:38.388701 update_engine[1511]: I20250513 12:32:38.384305 1511 update_check_scheduler.cc:74] Next update check in 8m3s May 13 12:32:38.378571 dbus-daemon[1490]: [system] SELinux support is enabled May 13 12:32:38.378669 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 12:32:38.378894 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 12:32:38.391508 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 12:32:38.396020 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:32:38.398304 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 12:32:38.398354 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 12:32:38.400114 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 12:32:38.400133 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 12:32:38.401729 systemd[1]: Started update-engine.service - Update Engine. May 13 12:32:38.404276 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 12:32:38.413791 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (Power Button) May 13 12:32:38.414051 systemd-logind[1504]: New seat seat0. May 13 12:32:38.415993 systemd[1]: Started systemd-logind.service - User Login Management. May 13 12:32:38.432931 bash[1554]: Updated "/home/core/.ssh/authorized_keys" May 13 12:32:38.436955 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 12:32:38.438889 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
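The resize pass above grows the root filesystem from 553472 to 1864699 blocks of 4 KiB, i.e. roughly from 2.1 GiB to 7.1 GiB. Nothing here goes beyond the numbers already in the log, but the conversion is handy to see spelled out:

    # Block counts from the EXT4/resize2fs messages above; 4 KiB blocks.
    BLOCK = 4096
    for label, blocks in [("before", 553472), ("after", 1864699)]:
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB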
May 13 12:32:38.469367 locksmithd[1547]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 12:32:38.573867 containerd[1528]: time="2025-05-13T12:32:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 12:32:38.574778 containerd[1528]: time="2025-05-13T12:32:38.574745080Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 13 12:32:38.584268 containerd[1528]: time="2025-05-13T12:32:38.584231840Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9µs" May 13 12:32:38.584268 containerd[1528]: time="2025-05-13T12:32:38.584263360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 12:32:38.584345 containerd[1528]: time="2025-05-13T12:32:38.584279840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 12:32:38.584450 containerd[1528]: time="2025-05-13T12:32:38.584429120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 12:32:38.584478 containerd[1528]: time="2025-05-13T12:32:38.584450920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 12:32:38.584478 containerd[1528]: time="2025-05-13T12:32:38.584474000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:32:38.584550 containerd[1528]: time="2025-05-13T12:32:38.584531520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:32:38.584579 containerd[1528]: time="2025-05-13T12:32:38.584561280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:32:38.584775 containerd[1528]: time="2025-05-13T12:32:38.584751800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:32:38.584775 containerd[1528]: time="2025-05-13T12:32:38.584773440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:32:38.584817 containerd[1528]: time="2025-05-13T12:32:38.584784640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:32:38.584817 containerd[1528]: time="2025-05-13T12:32:38.584792040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 12:32:38.584881 containerd[1528]: time="2025-05-13T12:32:38.584863680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 12:32:38.585050 containerd[1528]: time="2025-05-13T12:32:38.585031840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:32:38.585081 containerd[1528]: time="2025-05-13T12:32:38.585069000Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:32:38.585081 containerd[1528]: time="2025-05-13T12:32:38.585079560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 12:32:38.585130 containerd[1528]: time="2025-05-13T12:32:38.585110120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 12:32:38.585402 containerd[1528]: time="2025-05-13T12:32:38.585367480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 12:32:38.585440 containerd[1528]: time="2025-05-13T12:32:38.585429400Z" level=info msg="metadata content store policy set" policy=shared May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589129920Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589173960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589186920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589197640Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589208840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589218640Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589228840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589239320Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589253480Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589263360Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589271880Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 12:32:38.589379 containerd[1528]: time="2025-05-13T12:32:38.589283080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589388440Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589407040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589420960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 
12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589435840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589445800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589457520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589471520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589481320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589502720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589514120Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 12:32:38.589641 containerd[1528]: time="2025-05-13T12:32:38.589525920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 12:32:38.589813 containerd[1528]: time="2025-05-13T12:32:38.589717480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 12:32:38.589813 containerd[1528]: time="2025-05-13T12:32:38.589732880Z" level=info msg="Start snapshots syncer" May 13 12:32:38.589813 containerd[1528]: time="2025-05-13T12:32:38.589752920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 12:32:38.590120 containerd[1528]: time="2025-05-13T12:32:38.589942600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 12:32:38.590120 containerd[1528]: time="2025-05-13T12:32:38.589999000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590061960Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590164800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590187880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590199000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590209000Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590219440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590229040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590238360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590258800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 12:32:38.590295 containerd[1528]: 
time="2025-05-13T12:32:38.590268840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 12:32:38.590295 containerd[1528]: time="2025-05-13T12:32:38.590278040Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590313360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590327600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590335920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590345120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590355480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590364680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590375200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590448080Z" level=info msg="runtime interface created" May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590453160Z" level=info msg="created NRI interface" May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590461040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590471960Z" level=info msg="Connect containerd service" May 13 12:32:38.590751 containerd[1528]: time="2025-05-13T12:32:38.590510640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 12:32:38.591151 containerd[1528]: time="2025-05-13T12:32:38.591121640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.693822600Z" level=info msg="Start subscribing containerd event" May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.693888000Z" level=info msg="Start recovering state" May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.693968040Z" level=info msg="Start event monitor" May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.693979640Z" level=info msg="Start cni network conf syncer for default" May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.693987480Z" level=info msg="Start streaming server" May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.694011000Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 12:32:38.694198 containerd[1528]: 
time="2025-05-13T12:32:38.694018560Z" level=info msg="runtime interface starting up..." May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.694024240Z" level=info msg="starting plugins..." May 13 12:32:38.694198 containerd[1528]: time="2025-05-13T12:32:38.694036080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 12:32:38.694875 containerd[1528]: time="2025-05-13T12:32:38.694849440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 12:32:38.694982 containerd[1528]: time="2025-05-13T12:32:38.694968880Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 12:32:38.697380 containerd[1528]: time="2025-05-13T12:32:38.697356680Z" level=info msg="containerd successfully booted in 0.123850s" May 13 12:32:38.697463 systemd[1]: Started containerd.service - containerd container runtime. May 13 12:32:38.737160 tar[1518]: linux-arm64/LICENSE May 13 12:32:38.737336 tar[1518]: linux-arm64/README.md May 13 12:32:38.753519 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 12:32:39.337737 systemd-networkd[1430]: eth0: Gained IPv6LL May 13 12:32:39.340091 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 12:32:39.341966 systemd[1]: Reached target network-online.target - Network is Online. May 13 12:32:39.344465 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 12:32:39.346990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:32:39.356348 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 12:32:39.377835 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 12:32:39.378114 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 12:32:39.379829 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 12:32:39.385435 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 12:32:39.877771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:32:39.892973 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:32:40.044418 sshd_keygen[1509]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 12:32:40.064420 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 12:32:40.067350 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 12:32:40.087099 systemd[1]: issuegen.service: Deactivated successfully. May 13 12:32:40.087403 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 12:32:40.090958 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 12:32:40.111928 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 12:32:40.115094 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 12:32:40.117369 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 12:32:40.119111 systemd[1]: Reached target getty.target - Login Prompts. May 13 12:32:40.120359 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 12:32:40.121616 systemd[1]: Startup finished in 2.141s (kernel) + 4.945s (initrd) + 3.583s (userspace) = 10.671s. 
May 13 12:32:40.376324 kubelet[1608]: E0513 12:32:40.376268 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:32:40.378884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:32:40.379023 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:32:40.380716 systemd[1]: kubelet.service: Consumed 807ms CPU time, 239.3M memory peak. May 13 12:32:45.173045 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 12:32:45.174175 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:60404.service - OpenSSH per-connection server daemon (10.0.0.1:60404). May 13 12:32:45.235026 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 60404 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:32:45.236654 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:32:45.242516 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 12:32:45.243436 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 12:32:45.248838 systemd-logind[1504]: New session 1 of user core. May 13 12:32:45.272632 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 12:32:45.276071 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 12:32:45.294235 (systemd)[1642]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 12:32:45.296344 systemd-logind[1504]: New session c1 of user core. May 13 12:32:45.410664 systemd[1642]: Queued start job for default target default.target. May 13 12:32:45.420602 systemd[1642]: Created slice app.slice - User Application Slice. May 13 12:32:45.420626 systemd[1642]: Reached target paths.target - Paths. May 13 12:32:45.420675 systemd[1642]: Reached target timers.target - Timers. May 13 12:32:45.421970 systemd[1642]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 12:32:45.431301 systemd[1642]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 12:32:45.431369 systemd[1642]: Reached target sockets.target - Sockets. May 13 12:32:45.431410 systemd[1642]: Reached target basic.target - Basic System. May 13 12:32:45.431438 systemd[1642]: Reached target default.target - Main User Target. May 13 12:32:45.431465 systemd[1642]: Startup finished in 130ms. May 13 12:32:45.431747 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 12:32:45.433351 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 12:32:45.494972 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:60408.service - OpenSSH per-connection server daemon (10.0.0.1:60408). May 13 12:32:45.541040 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 60408 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:32:45.542245 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:32:45.546651 systemd-logind[1504]: New session 2 of user core. May 13 12:32:45.560728 systemd[1]: Started session-2.scope - Session 2 of User core. 
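The kubelet exit above, and the restart loop that follows, is the normal state of a node that kubeadm has not configured yet: the unit starts the kubelet with --config pointing at /var/lib/kubelet/config.yaml, a file that only appears once kubeadm init or kubeadm join runs, and the KUBELET_KUBEADM_ARGS / KUBELET_EXTRA_ARGS variables flagged as unset are populated at the same time. For orientation, the stock kubeadm drop-in wires those pieces together roughly as below; the exact unit layout on this image may differ.

    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (upstream kubeadm layout, shown for reference)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # written by "kubeadm init" / "kubeadm join" at runtime
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # optional user overrides such as KUBELET_EXTRA_ARGS
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS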
May 13 12:32:45.612333 sshd[1655]: Connection closed by 10.0.0.1 port 60408 May 13 12:32:45.612631 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 13 12:32:45.630681 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:60408.service: Deactivated successfully. May 13 12:32:45.632943 systemd[1]: session-2.scope: Deactivated successfully. May 13 12:32:45.633553 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. May 13 12:32:45.635744 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:60424.service - OpenSSH per-connection server daemon (10.0.0.1:60424). May 13 12:32:45.636342 systemd-logind[1504]: Removed session 2. May 13 12:32:45.690506 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 60424 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:32:45.691841 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:32:45.696465 systemd-logind[1504]: New session 3 of user core. May 13 12:32:45.707715 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 12:32:45.756119 sshd[1664]: Connection closed by 10.0.0.1 port 60424 May 13 12:32:45.756412 sshd-session[1661]: pam_unix(sshd:session): session closed for user core May 13 12:32:45.767518 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:60424.service: Deactivated successfully. May 13 12:32:45.769901 systemd[1]: session-3.scope: Deactivated successfully. May 13 12:32:45.771392 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. May 13 12:32:45.772594 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:60428.service - OpenSSH per-connection server daemon (10.0.0.1:60428). May 13 12:32:45.773456 systemd-logind[1504]: Removed session 3. May 13 12:32:45.815963 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 60428 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:32:45.817115 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:32:45.821359 systemd-logind[1504]: New session 4 of user core. May 13 12:32:45.828757 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 12:32:45.880954 sshd[1672]: Connection closed by 10.0.0.1 port 60428 May 13 12:32:45.881227 sshd-session[1670]: pam_unix(sshd:session): session closed for user core May 13 12:32:45.906616 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:60428.service: Deactivated successfully. May 13 12:32:45.908906 systemd[1]: session-4.scope: Deactivated successfully. May 13 12:32:45.910412 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. May 13 12:32:45.911861 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:60442.service - OpenSSH per-connection server daemon (10.0.0.1:60442). May 13 12:32:45.913017 systemd-logind[1504]: Removed session 4. May 13 12:32:45.968396 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 60442 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:32:45.969680 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:32:45.973535 systemd-logind[1504]: New session 5 of user core. May 13 12:32:45.986740 systemd[1]: Started session-5.scope - Session 5 of User core. 
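The sshd@N-10.0.0.26:22-10.0.0.1:PORT.service instances above are systemd socket activation at work: a listening sshd.socket declared with Accept=yes spawns one templated service per incoming connection, which is why every session gets its own unit name. A minimal sketch of that pairing is shown below; this is the generic pattern, not the exact units shipped on this image.

    sshd.socket  (sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    sshd@.service  (sketch)
    [Service]
    # -i runs sshd in inetd mode, handling exactly one connection on stdin
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket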
May 13 12:32:46.053901 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 12:32:46.054188 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:32:46.068275 sudo[1681]: pam_unix(sudo:session): session closed for user root May 13 12:32:46.069791 sshd[1680]: Connection closed by 10.0.0.1 port 60442 May 13 12:32:46.070270 sshd-session[1678]: pam_unix(sshd:session): session closed for user core May 13 12:32:46.081527 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:60442.service: Deactivated successfully. May 13 12:32:46.084140 systemd[1]: session-5.scope: Deactivated successfully. May 13 12:32:46.084829 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit. May 13 12:32:46.087157 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:60456.service - OpenSSH per-connection server daemon (10.0.0.1:60456). May 13 12:32:46.088047 systemd-logind[1504]: Removed session 5. May 13 12:32:46.144681 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 60456 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:32:46.145970 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:32:46.149760 systemd-logind[1504]: New session 6 of user core. May 13 12:32:46.162719 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 12:32:46.213706 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 12:32:46.214242 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:32:46.278623 sudo[1691]: pam_unix(sudo:session): session closed for user root May 13 12:32:46.283668 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 12:32:46.283925 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:32:46.292054 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:32:46.333995 augenrules[1713]: No rules May 13 12:32:46.335153 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:32:46.335398 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 12:32:46.336666 sudo[1690]: pam_unix(sudo:session): session closed for user root May 13 12:32:46.338435 sshd[1689]: Connection closed by 10.0.0.1 port 60456 May 13 12:32:46.338351 sshd-session[1687]: pam_unix(sshd:session): session closed for user core May 13 12:32:46.349516 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:60456.service: Deactivated successfully. May 13 12:32:46.351191 systemd[1]: session-6.scope: Deactivated successfully. May 13 12:32:46.352085 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit. May 13 12:32:46.354592 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:60460.service - OpenSSH per-connection server daemon (10.0.0.1:60460). May 13 12:32:46.355379 systemd-logind[1504]: Removed session 6. May 13 12:32:46.408378 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 60460 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:32:46.409762 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:32:46.413554 systemd-logind[1504]: New session 7 of user core. May 13 12:32:46.425722 systemd[1]: Started session-7.scope - Session 7 of User core. 
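The "No rules" line from augenrules follows directly from the two rule files being removed just before audit-rules.service was restarted: augenrules concatenates whatever lives under /etc/audit/rules.d/ and loads the result. As an illustration of what such a fragment normally contains (generic example rules, not a reconstruction of the files that were deleted here):

    /etc/audit/rules.d/99-example.rules  (illustrative)
    # flush existing rules and size the kernel backlog
    -D
    -b 8192
    # watch writes and attribute changes under /etc/kubernetes
    -w /etc/kubernetes/ -p wa -k kube-config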
May 13 12:32:46.476199 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 12:32:46.476465 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:32:46.832787 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 12:32:46.850999 (dockerd)[1747]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 12:32:47.119956 dockerd[1747]: time="2025-05-13T12:32:47.119834576Z" level=info msg="Starting up" May 13 12:32:47.122606 dockerd[1747]: time="2025-05-13T12:32:47.122578914Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 12:32:47.146941 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3001546258-merged.mount: Deactivated successfully. May 13 12:32:47.165293 dockerd[1747]: time="2025-05-13T12:32:47.165133620Z" level=info msg="Loading containers: start." May 13 12:32:47.173596 kernel: Initializing XFRM netlink socket May 13 12:32:47.380141 systemd-networkd[1430]: docker0: Link UP May 13 12:32:47.383447 dockerd[1747]: time="2025-05-13T12:32:47.383405703Z" level=info msg="Loading containers: done." May 13 12:32:47.397278 dockerd[1747]: time="2025-05-13T12:32:47.397224523Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 12:32:47.397411 dockerd[1747]: time="2025-05-13T12:32:47.397303767Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 13 12:32:47.397411 dockerd[1747]: time="2025-05-13T12:32:47.397405647Z" level=info msg="Initializing buildkit" May 13 12:32:47.417784 dockerd[1747]: time="2025-05-13T12:32:47.417749175Z" level=info msg="Completed buildkit initialization" May 13 12:32:47.422456 dockerd[1747]: time="2025-05-13T12:32:47.422422041Z" level=info msg="Daemon has completed initialization" May 13 12:32:47.422616 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 12:32:47.423025 dockerd[1747]: time="2025-05-13T12:32:47.422489947Z" level=info msg="API listen on /run/docker.sock" May 13 12:32:48.323043 containerd[1528]: time="2025-05-13T12:32:48.322923163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 12:32:49.119678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078107079.mount: Deactivated successfully. 
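Once docker.service reports the API listening on /run/docker.sock, the values in the daemon log can be confirmed from the client side; the commands below assume the docker CLI is installed and use its built-in Go templates.

    docker version --format '{{.Server.Version}}'   # 28.0.1 according to the daemon log above
    docker info --format '{{.Driver}}'              # overlay2, matching storage-driver in the log
    docker info --format '{{.CgroupDriver}}'        # cgroup driver the daemon settled on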
May 13 12:32:50.216004 containerd[1528]: time="2025-05-13T12:32:50.215938488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:50.216498 containerd[1528]: time="2025-05-13T12:32:50.216463642Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 13 12:32:50.217271 containerd[1528]: time="2025-05-13T12:32:50.217232735Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:50.220655 containerd[1528]: time="2025-05-13T12:32:50.220620483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:50.222221 containerd[1528]: time="2025-05-13T12:32:50.222093582Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.899125102s" May 13 12:32:50.222221 containerd[1528]: time="2025-05-13T12:32:50.222127582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 13 12:32:50.237496 containerd[1528]: time="2025-05-13T12:32:50.237397391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 12:32:50.629425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 12:32:50.630891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:32:50.756320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:32:50.759481 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:32:50.795514 kubelet[2036]: E0513 12:32:50.795463 2036 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:32:50.798949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:32:50.799109 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:32:50.800696 systemd[1]: kubelet.service: Consumed 131ms CPU time, 95.2M memory peak. 
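The PullImage/ImageCreate sequence above is containerd's CRI image service at work; the log does not show which client requested the pull (on a kubeadm-style node it is typically kubeadm pre-pulling control-plane images). The same pull can be reproduced by hand over the CRI socket from the dumped config, assuming crictl is installed:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
           --image-endpoint unix:///run/containerd/containerd.sock \
           pull registry.k8s.io/kube-apiserver:v1.30.12
    crictl images    # list what the CRI image service now holds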
May 13 12:32:51.735100 containerd[1528]: time="2025-05-13T12:32:51.735050869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:51.736038 containerd[1528]: time="2025-05-13T12:32:51.735773606Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 13 12:32:51.736720 containerd[1528]: time="2025-05-13T12:32:51.736684446Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:51.738983 containerd[1528]: time="2025-05-13T12:32:51.738954117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:51.740195 containerd[1528]: time="2025-05-13T12:32:51.740105707Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.502521957s" May 13 12:32:51.740195 containerd[1528]: time="2025-05-13T12:32:51.740138249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 13 12:32:51.756115 containerd[1528]: time="2025-05-13T12:32:51.756080732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 12:32:52.674226 containerd[1528]: time="2025-05-13T12:32:52.674174882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:52.675669 containerd[1528]: time="2025-05-13T12:32:52.675635538Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 13 12:32:52.676705 containerd[1528]: time="2025-05-13T12:32:52.676647916Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:52.679800 containerd[1528]: time="2025-05-13T12:32:52.679769857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:52.680302 containerd[1528]: time="2025-05-13T12:32:52.680263141Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 924.144386ms" May 13 12:32:52.680358 containerd[1528]: time="2025-05-13T12:32:52.680302299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 13 12:32:52.695744 
containerd[1528]: time="2025-05-13T12:32:52.695708634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 12:32:53.647712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179323575.mount: Deactivated successfully. May 13 12:32:53.836433 containerd[1528]: time="2025-05-13T12:32:53.836385839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:53.837320 containerd[1528]: time="2025-05-13T12:32:53.837290148Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 13 12:32:53.838089 containerd[1528]: time="2025-05-13T12:32:53.838051521Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:53.839450 containerd[1528]: time="2025-05-13T12:32:53.839417634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:53.840494 containerd[1528]: time="2025-05-13T12:32:53.840453237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.144709767s" May 13 12:32:53.840494 containerd[1528]: time="2025-05-13T12:32:53.840490945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 12:32:53.856550 containerd[1528]: time="2025-05-13T12:32:53.856501455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 12:32:54.389089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333386610.mount: Deactivated successfully. 
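Images pulled through the CRI plugin land in containerd's k8s.io namespace, so the containerd-native view of the same content is available with ctr (assumed to be installed alongside containerd):

    ctr --address /run/containerd/containerd.sock --namespace k8s.io images ls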
May 13 12:32:55.105780 containerd[1528]: time="2025-05-13T12:32:55.105715099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:55.106300 containerd[1528]: time="2025-05-13T12:32:55.106266948Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 13 12:32:55.106893 containerd[1528]: time="2025-05-13T12:32:55.106870479Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:55.109257 containerd[1528]: time="2025-05-13T12:32:55.109226927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:55.110980 containerd[1528]: time="2025-05-13T12:32:55.110916942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.254381739s" May 13 12:32:55.110980 containerd[1528]: time="2025-05-13T12:32:55.110950042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 12:32:55.126288 containerd[1528]: time="2025-05-13T12:32:55.126254167Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 12:32:55.573064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785696774.mount: Deactivated successfully. 
May 13 12:32:55.578169 containerd[1528]: time="2025-05-13T12:32:55.578129157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:55.578810 containerd[1528]: time="2025-05-13T12:32:55.578777331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 13 12:32:55.579492 containerd[1528]: time="2025-05-13T12:32:55.579434393Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:55.581576 containerd[1528]: time="2025-05-13T12:32:55.581247841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:55.581972 containerd[1528]: time="2025-05-13T12:32:55.581943112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 455.651581ms" May 13 12:32:55.582058 containerd[1528]: time="2025-05-13T12:32:55.582043298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 13 12:32:55.596460 containerd[1528]: time="2025-05-13T12:32:55.596422299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 12:32:56.113943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619286368.mount: Deactivated successfully. May 13 12:32:57.899570 containerd[1528]: time="2025-05-13T12:32:57.899482494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:57.900451 containerd[1528]: time="2025-05-13T12:32:57.900128555Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 13 12:32:57.901616 containerd[1528]: time="2025-05-13T12:32:57.900968746Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:57.904202 containerd[1528]: time="2025-05-13T12:32:57.904131446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:32:57.905817 containerd[1528]: time="2025-05-13T12:32:57.905734788Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.309271313s" May 13 12:32:57.905817 containerd[1528]: time="2025-05-13T12:32:57.905767003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 13 12:33:01.049852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
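The pause:3.9 image pulled above is the sandbox (pod infrastructure) image; as the kubelet notes a little further down, the sandbox image should also be set on the runtime side. In containerd's version 2 config layout that is a single setting, shown below; containerd 2.0.4 migrates such a config automatically on load, exactly as the earlier "Configuration migrated from version 2" line reports.

    config.toml fragment  (version 2 layout, shown for reference)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"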
May 13 12:33:01.051316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:33:01.181311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:33:01.191847 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:33:01.228585 kubelet[2299]: E0513 12:33:01.228436 2299 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:33:01.230993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:33:01.231122 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:33:01.231393 systemd[1]: kubelet.service: Consumed 126ms CPU time, 94.9M memory peak. May 13 12:33:04.850783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:33:04.850921 systemd[1]: kubelet.service: Consumed 126ms CPU time, 94.9M memory peak. May 13 12:33:04.853080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:33:04.868674 systemd[1]: Reload requested from client PID 2313 ('systemctl') (unit session-7.scope)... May 13 12:33:04.868804 systemd[1]: Reloading... May 13 12:33:04.941678 zram_generator::config[2357]: No configuration found. May 13 12:33:05.047104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:33:05.131239 systemd[1]: Reloading finished in 262 ms. May 13 12:33:05.165941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:33:05.169014 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:33:05.169370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:33:05.169759 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:33:05.169980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:33:05.170017 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak. May 13 12:33:05.171951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:33:05.296592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:33:05.300537 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:33:05.337835 kubelet[2406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:33:05.337835 kubelet[2406]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 12:33:05.337835 kubelet[2406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:33:05.338664 kubelet[2406]: I0513 12:33:05.338616 2406 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:33:05.989097 kubelet[2406]: I0513 12:33:05.989057 2406 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 12:33:05.989243 kubelet[2406]: I0513 12:33:05.989232 2406 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:33:05.989575 kubelet[2406]: I0513 12:33:05.989536 2406 server.go:927] "Client rotation is on, will bootstrap in background" May 13 12:33:06.024508 kubelet[2406]: I0513 12:33:06.024445 2406 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:33:06.025081 kubelet[2406]: E0513 12:33:06.025057 2406 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.036517 kubelet[2406]: I0513 12:33:06.036486 2406 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 12:33:06.036966 kubelet[2406]: I0513 12:33:06.036930 2406 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:33:06.037146 kubelet[2406]: I0513 12:33:06.036957 2406 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 12:33:06.037261 kubelet[2406]: I0513 12:33:06.037250 2406 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:33:06.037283 kubelet[2406]: I0513 12:33:06.037262 2406 container_manager_linux.go:301] "Creating device plugin manager" May 13 12:33:06.037406 
kubelet[2406]: I0513 12:33:06.037385 2406 state_mem.go:36] "Initialized new in-memory state store" May 13 12:33:06.038724 kubelet[2406]: I0513 12:33:06.038703 2406 kubelet.go:400] "Attempting to sync node with API server" May 13 12:33:06.038771 kubelet[2406]: I0513 12:33:06.038727 2406 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:33:06.038928 kubelet[2406]: I0513 12:33:06.038914 2406 kubelet.go:312] "Adding apiserver pod source" May 13 12:33:06.039100 kubelet[2406]: I0513 12:33:06.039088 2406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:33:06.040105 kubelet[2406]: W0513 12:33:06.040041 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.040105 kubelet[2406]: E0513 12:33:06.040100 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.040501 kubelet[2406]: I0513 12:33:06.040474 2406 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:33:06.040689 kubelet[2406]: W0513 12:33:06.040650 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.040763 kubelet[2406]: E0513 12:33:06.040752 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.041029 kubelet[2406]: I0513 12:33:06.041013 2406 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:33:06.041349 kubelet[2406]: W0513 12:33:06.041168 2406 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
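The deprecation warnings above all point the same way: --container-runtime-endpoint and --volume-plugin-dir are meant to move into the kubelet config file, while the sandbox image moves to the runtime config (see the containerd fragment earlier). A sketch of how the settings visible in this log would look in /var/lib/kubelet/config.yaml form follows; the real file on this node (if written by kubeadm) will contain additional fields.

    /var/lib/kubelet/config.yaml  (sketch based on the values in the log above)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    # replaces the deprecated --container-runtime-endpoint flag
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # replaces the deprecated --volume-plugin-dir flag
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"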
May 13 12:33:06.042225 kubelet[2406]: I0513 12:33:06.042207 2406 server.go:1264] "Started kubelet" May 13 12:33:06.044502 kubelet[2406]: I0513 12:33:06.044462 2406 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:33:06.044706 kubelet[2406]: I0513 12:33:06.044644 2406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:33:06.044974 kubelet[2406]: I0513 12:33:06.044946 2406 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:33:06.045783 kubelet[2406]: I0513 12:33:06.045759 2406 server.go:455] "Adding debug handlers to kubelet server" May 13 12:33:06.045979 kubelet[2406]: E0513 12:33:06.044227 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f16325f1134d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:33:06.042180819 +0000 UTC m=+0.738783724,LastTimestamp:2025-05-13 12:33:06.042180819 +0000 UTC m=+0.738783724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:33:06.046967 kubelet[2406]: E0513 12:33:06.046946 2406 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:33:06.047572 kubelet[2406]: I0513 12:33:06.047202 2406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:33:06.047572 kubelet[2406]: I0513 12:33:06.047454 2406 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 12:33:06.047572 kubelet[2406]: I0513 12:33:06.047539 2406 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:33:06.048045 kubelet[2406]: I0513 12:33:06.048029 2406 reconciler.go:26] "Reconciler: start to sync state" May 13 12:33:06.048438 kubelet[2406]: W0513 12:33:06.048398 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.048531 kubelet[2406]: E0513 12:33:06.048520 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.048614 kubelet[2406]: E0513 12:33:06.048430 2406 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:33:06.049078 kubelet[2406]: E0513 12:33:06.049041 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" May 13 12:33:06.049739 kubelet[2406]: I0513 12:33:06.049718 2406 factory.go:221] Registration of the systemd container factory 
successfully May 13 12:33:06.049902 kubelet[2406]: I0513 12:33:06.049793 2406 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:33:06.051311 kubelet[2406]: I0513 12:33:06.051282 2406 factory.go:221] Registration of the containerd container factory successfully May 13 12:33:06.061148 kubelet[2406]: I0513 12:33:06.061127 2406 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 12:33:06.061148 kubelet[2406]: I0513 12:33:06.061144 2406 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 12:33:06.061245 kubelet[2406]: I0513 12:33:06.061161 2406 state_mem.go:36] "Initialized new in-memory state store" May 13 12:33:06.062088 kubelet[2406]: I0513 12:33:06.062046 2406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:33:06.063216 kubelet[2406]: I0513 12:33:06.063197 2406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:33:06.063358 kubelet[2406]: I0513 12:33:06.063348 2406 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 12:33:06.063403 kubelet[2406]: I0513 12:33:06.063369 2406 kubelet.go:2337] "Starting kubelet main sync loop" May 13 12:33:06.063424 kubelet[2406]: E0513 12:33:06.063407 2406 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:33:06.064072 kubelet[2406]: W0513 12:33:06.064041 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.064135 kubelet[2406]: E0513 12:33:06.064083 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.139047 kubelet[2406]: I0513 12:33:06.139003 2406 policy_none.go:49] "None policy: Start" May 13 12:33:06.139918 kubelet[2406]: I0513 12:33:06.139902 2406 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 12:33:06.140002 kubelet[2406]: I0513 12:33:06.139929 2406 state_mem.go:35] "Initializing new in-memory state store" May 13 12:33:06.145244 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 12:33:06.149663 kubelet[2406]: I0513 12:33:06.149645 2406 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:33:06.150010 kubelet[2406]: E0513 12:33:06.149973 2406 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" May 13 12:33:06.157947 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 12:33:06.160469 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
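Every "connection refused" against https://10.0.0.26:6443 in this stretch is the same chicken-and-egg situation: the kubelet is trying to talk to an API server that is itself one of the static pods it has not started yet, so the informers, the event post and the certificate signing request all fail until that container is up. A plain probe from the node shows the transition; -k is needed because the host does not trust the cluster CA.

    curl -ks https://10.0.0.26:6443/healthz ; echo
    # refused while the kube-apiserver static pod is still starting, typically "ok" once it serves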
May 13 12:33:06.163789 kubelet[2406]: E0513 12:33:06.163752 2406 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 12:33:06.170239 kubelet[2406]: I0513 12:33:06.170202 2406 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:33:06.170573 kubelet[2406]: I0513 12:33:06.170370 2406 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:33:06.170573 kubelet[2406]: I0513 12:33:06.170479 2406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:33:06.171965 kubelet[2406]: E0513 12:33:06.171946 2406 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 12:33:06.249754 kubelet[2406]: E0513 12:33:06.249660 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" May 13 12:33:06.351180 kubelet[2406]: I0513 12:33:06.351154 2406 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:33:06.351561 kubelet[2406]: E0513 12:33:06.351526 2406 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" May 13 12:33:06.364637 kubelet[2406]: I0513 12:33:06.364612 2406 topology_manager.go:215] "Topology Admit Handler" podUID="23fb698fbd15401c47f14cb586c93b1f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 12:33:06.365402 kubelet[2406]: I0513 12:33:06.365375 2406 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 12:33:06.366205 kubelet[2406]: I0513 12:33:06.366170 2406 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 12:33:06.372452 systemd[1]: Created slice kubepods-burstable-pod23fb698fbd15401c47f14cb586c93b1f.slice - libcontainer container kubepods-burstable-pod23fb698fbd15401c47f14cb586c93b1f.slice. May 13 12:33:06.382225 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 13 12:33:06.392612 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
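The three Topology Admit Handler entries are not pods fetched from the (still unreachable) API server; they are static pods the kubelet reads from the staticPodPath /etc/kubernetes/manifests, and the UIDs shown are assigned locally by the kubelet. A static pod manifest is an ordinary pod spec dropped into that directory; the example below is purely illustrative, not one of the kubeadm-generated control-plane manifests.

    /etc/kubernetes/manifests/example-static.yaml  (illustrative only)
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-static
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9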
May 13 12:33:06.450612 kubelet[2406]: I0513 12:33:06.450541 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23fb698fbd15401c47f14cb586c93b1f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb698fbd15401c47f14cb586c93b1f\") " pod="kube-system/kube-apiserver-localhost" May 13 12:33:06.450612 kubelet[2406]: I0513 12:33:06.450598 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23fb698fbd15401c47f14cb586c93b1f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23fb698fbd15401c47f14cb586c93b1f\") " pod="kube-system/kube-apiserver-localhost" May 13 12:33:06.450612 kubelet[2406]: I0513 12:33:06.450617 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:06.450808 kubelet[2406]: I0513 12:33:06.450635 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:06.450808 kubelet[2406]: I0513 12:33:06.450656 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:06.450808 kubelet[2406]: I0513 12:33:06.450671 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 12:33:06.450808 kubelet[2406]: I0513 12:33:06.450686 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23fb698fbd15401c47f14cb586c93b1f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb698fbd15401c47f14cb586c93b1f\") " pod="kube-system/kube-apiserver-localhost" May 13 12:33:06.450808 kubelet[2406]: I0513 12:33:06.450702 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:06.450951 kubelet[2406]: I0513 12:33:06.450718 2406 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 12:33:06.650282 kubelet[2406]: E0513 12:33:06.650191 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" May 13 12:33:06.681277 containerd[1528]: time="2025-05-13T12:33:06.681235082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23fb698fbd15401c47f14cb586c93b1f,Namespace:kube-system,Attempt:0,}" May 13 12:33:06.693094 containerd[1528]: time="2025-05-13T12:33:06.693039200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 12:33:06.694718 containerd[1528]: time="2025-05-13T12:33:06.694677344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 12:33:06.752945 kubelet[2406]: I0513 12:33:06.752920 2406 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:33:06.753482 kubelet[2406]: E0513 12:33:06.753449 2406 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" May 13 12:33:06.876356 kubelet[2406]: W0513 12:33:06.876299 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.876463 kubelet[2406]: E0513 12:33:06.876364 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.961186 kubelet[2406]: W0513 12:33:06.961094 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:06.961186 kubelet[2406]: E0513 12:33:06.961133 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:07.136224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414895887.mount: Deactivated successfully. 
May 13 12:33:07.140305 containerd[1528]: time="2025-05-13T12:33:07.140268496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:33:07.140842 containerd[1528]: time="2025-05-13T12:33:07.140807450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 12:33:07.141938 containerd[1528]: time="2025-05-13T12:33:07.141882997Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:33:07.144410 containerd[1528]: time="2025-05-13T12:33:07.144379550Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:33:07.145202 containerd[1528]: time="2025-05-13T12:33:07.145163775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:33:07.145799 containerd[1528]: time="2025-05-13T12:33:07.145770925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:33:07.146575 containerd[1528]: time="2025-05-13T12:33:07.146469295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 462.852573ms" May 13 12:33:07.146964 containerd[1528]: time="2025-05-13T12:33:07.146929283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 12:33:07.147361 containerd[1528]: time="2025-05-13T12:33:07.147329084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 12:33:07.150682 containerd[1528]: time="2025-05-13T12:33:07.150644461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 453.671944ms" May 13 12:33:07.151193 containerd[1528]: time="2025-05-13T12:33:07.151156105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 456.438871ms" May 13 12:33:07.162754 containerd[1528]: time="2025-05-13T12:33:07.162722903Z" level=info msg="connecting to shim a4ebd5d0fb0836e16edce8de1e2d43e66056dd1f3d6c30ee17ff7b09ca134637" address="unix:///run/containerd/s/8a5d7f6534659acd115751d46e8181a5d5c1a3db7a78b5b829c5d1ec70597a3c" namespace=k8s.io protocol=ttrpc version=3 May 
13 12:33:07.172715 containerd[1528]: time="2025-05-13T12:33:07.172631273Z" level=info msg="connecting to shim 7e56c8151d01918817edbb1f07f82804b0bfdd2b2c88648f333ee276faee1811" address="unix:///run/containerd/s/871686825fe7277d86358e031c8f766b74c7ff7be7009f13a5895c0e05461f0b" namespace=k8s.io protocol=ttrpc version=3 May 13 12:33:07.175931 containerd[1528]: time="2025-05-13T12:33:07.175896955Z" level=info msg="connecting to shim 3b3649226a6c537f84ef4694e3d81c765da1fe808c364d58075668ed75141825" address="unix:///run/containerd/s/c1b9c2a9b4ebcce944f6b6ca4f5cd57c35c4b5724b84656ee5cb40729bb854bc" namespace=k8s.io protocol=ttrpc version=3 May 13 12:33:07.190807 systemd[1]: Started cri-containerd-a4ebd5d0fb0836e16edce8de1e2d43e66056dd1f3d6c30ee17ff7b09ca134637.scope - libcontainer container a4ebd5d0fb0836e16edce8de1e2d43e66056dd1f3d6c30ee17ff7b09ca134637. May 13 12:33:07.194753 systemd[1]: Started cri-containerd-7e56c8151d01918817edbb1f07f82804b0bfdd2b2c88648f333ee276faee1811.scope - libcontainer container 7e56c8151d01918817edbb1f07f82804b0bfdd2b2c88648f333ee276faee1811. May 13 12:33:07.198040 systemd[1]: Started cri-containerd-3b3649226a6c537f84ef4694e3d81c765da1fe808c364d58075668ed75141825.scope - libcontainer container 3b3649226a6c537f84ef4694e3d81c765da1fe808c364d58075668ed75141825. May 13 12:33:07.235613 containerd[1528]: time="2025-05-13T12:33:07.235412321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23fb698fbd15401c47f14cb586c93b1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4ebd5d0fb0836e16edce8de1e2d43e66056dd1f3d6c30ee17ff7b09ca134637\"" May 13 12:33:07.237892 containerd[1528]: time="2025-05-13T12:33:07.237860262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b3649226a6c537f84ef4694e3d81c765da1fe808c364d58075668ed75141825\"" May 13 12:33:07.240274 containerd[1528]: time="2025-05-13T12:33:07.240247254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e56c8151d01918817edbb1f07f82804b0bfdd2b2c88648f333ee276faee1811\"" May 13 12:33:07.240794 containerd[1528]: time="2025-05-13T12:33:07.240443991Z" level=info msg="CreateContainer within sandbox \"a4ebd5d0fb0836e16edce8de1e2d43e66056dd1f3d6c30ee17ff7b09ca134637\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:33:07.241842 containerd[1528]: time="2025-05-13T12:33:07.241811820Z" level=info msg="CreateContainer within sandbox \"3b3649226a6c537f84ef4694e3d81c765da1fe808c364d58075668ed75141825\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:33:07.243675 containerd[1528]: time="2025-05-13T12:33:07.243632669Z" level=info msg="CreateContainer within sandbox \"7e56c8151d01918817edbb1f07f82804b0bfdd2b2c88648f333ee276faee1811\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:33:07.250627 containerd[1528]: time="2025-05-13T12:33:07.250601836Z" level=info msg="Container 1d49055171a9afe7cf46825a2f4e97aed7d68f8547d7b831265bbd1a8484d8dd: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:07.252490 containerd[1528]: time="2025-05-13T12:33:07.252461327Z" level=info msg="Container 6ff27c3dfcc0ce054435e31769093352bbecd3da207511c79339ac9f6c731dbc: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:07.254626 containerd[1528]: 
time="2025-05-13T12:33:07.254599806Z" level=info msg="Container 4170cb9d556c336d140ed308ab5fa8b5def02010c907e2bc2fbc7ad9a502d104: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:07.259710 containerd[1528]: time="2025-05-13T12:33:07.259603685Z" level=info msg="CreateContainer within sandbox \"a4ebd5d0fb0836e16edce8de1e2d43e66056dd1f3d6c30ee17ff7b09ca134637\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ff27c3dfcc0ce054435e31769093352bbecd3da207511c79339ac9f6c731dbc\"" May 13 12:33:07.260273 containerd[1528]: time="2025-05-13T12:33:07.260167787Z" level=info msg="StartContainer for \"6ff27c3dfcc0ce054435e31769093352bbecd3da207511c79339ac9f6c731dbc\"" May 13 12:33:07.261850 containerd[1528]: time="2025-05-13T12:33:07.261821732Z" level=info msg="connecting to shim 6ff27c3dfcc0ce054435e31769093352bbecd3da207511c79339ac9f6c731dbc" address="unix:///run/containerd/s/8a5d7f6534659acd115751d46e8181a5d5c1a3db7a78b5b829c5d1ec70597a3c" protocol=ttrpc version=3 May 13 12:33:07.263102 containerd[1528]: time="2025-05-13T12:33:07.263059977Z" level=info msg="CreateContainer within sandbox \"7e56c8151d01918817edbb1f07f82804b0bfdd2b2c88648f333ee276faee1811\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4170cb9d556c336d140ed308ab5fa8b5def02010c907e2bc2fbc7ad9a502d104\"" May 13 12:33:07.263589 containerd[1528]: time="2025-05-13T12:33:07.263542430Z" level=info msg="StartContainer for \"4170cb9d556c336d140ed308ab5fa8b5def02010c907e2bc2fbc7ad9a502d104\"" May 13 12:33:07.264696 containerd[1528]: time="2025-05-13T12:33:07.264652013Z" level=info msg="CreateContainer within sandbox \"3b3649226a6c537f84ef4694e3d81c765da1fe808c364d58075668ed75141825\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1d49055171a9afe7cf46825a2f4e97aed7d68f8547d7b831265bbd1a8484d8dd\"" May 13 12:33:07.265084 containerd[1528]: time="2025-05-13T12:33:07.265028949Z" level=info msg="StartContainer for \"1d49055171a9afe7cf46825a2f4e97aed7d68f8547d7b831265bbd1a8484d8dd\"" May 13 12:33:07.265399 containerd[1528]: time="2025-05-13T12:33:07.265367162Z" level=info msg="connecting to shim 4170cb9d556c336d140ed308ab5fa8b5def02010c907e2bc2fbc7ad9a502d104" address="unix:///run/containerd/s/871686825fe7277d86358e031c8f766b74c7ff7be7009f13a5895c0e05461f0b" protocol=ttrpc version=3 May 13 12:33:07.266030 containerd[1528]: time="2025-05-13T12:33:07.265982881Z" level=info msg="connecting to shim 1d49055171a9afe7cf46825a2f4e97aed7d68f8547d7b831265bbd1a8484d8dd" address="unix:///run/containerd/s/c1b9c2a9b4ebcce944f6b6ca4f5cd57c35c4b5724b84656ee5cb40729bb854bc" protocol=ttrpc version=3 May 13 12:33:07.266410 kubelet[2406]: W0513 12:33:07.266292 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:07.266410 kubelet[2406]: E0513 12:33:07.266399 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:07.281699 systemd[1]: Started cri-containerd-6ff27c3dfcc0ce054435e31769093352bbecd3da207511c79339ac9f6c731dbc.scope - libcontainer container 6ff27c3dfcc0ce054435e31769093352bbecd3da207511c79339ac9f6c731dbc. 
May 13 12:33:07.285720 systemd[1]: Started cri-containerd-1d49055171a9afe7cf46825a2f4e97aed7d68f8547d7b831265bbd1a8484d8dd.scope - libcontainer container 1d49055171a9afe7cf46825a2f4e97aed7d68f8547d7b831265bbd1a8484d8dd. May 13 12:33:07.286511 systemd[1]: Started cri-containerd-4170cb9d556c336d140ed308ab5fa8b5def02010c907e2bc2fbc7ad9a502d104.scope - libcontainer container 4170cb9d556c336d140ed308ab5fa8b5def02010c907e2bc2fbc7ad9a502d104. May 13 12:33:07.337445 containerd[1528]: time="2025-05-13T12:33:07.337409667Z" level=info msg="StartContainer for \"1d49055171a9afe7cf46825a2f4e97aed7d68f8547d7b831265bbd1a8484d8dd\" returns successfully" May 13 12:33:07.339962 containerd[1528]: time="2025-05-13T12:33:07.339843992Z" level=info msg="StartContainer for \"4170cb9d556c336d140ed308ab5fa8b5def02010c907e2bc2fbc7ad9a502d104\" returns successfully" May 13 12:33:07.345052 kubelet[2406]: W0513 12:33:07.344995 2406 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:07.345115 kubelet[2406]: E0513 12:33:07.345058 2406 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused May 13 12:33:07.346199 containerd[1528]: time="2025-05-13T12:33:07.346110584Z" level=info msg="StartContainer for \"6ff27c3dfcc0ce054435e31769093352bbecd3da207511c79339ac9f6c731dbc\" returns successfully" May 13 12:33:07.453255 kubelet[2406]: E0513 12:33:07.451277 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s" May 13 12:33:07.556340 kubelet[2406]: I0513 12:33:07.556223 2406 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:33:08.961906 kubelet[2406]: E0513 12:33:08.961779 2406 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f16325f1134d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:33:06.042180819 +0000 UTC m=+0.738783724,LastTimestamp:2025-05-13 12:33:06.042180819 +0000 UTC m=+0.738783724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:33:09.008635 kubelet[2406]: I0513 12:33:09.008597 2406 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 12:33:09.014659 kubelet[2406]: E0513 12:33:09.014577 2406 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f16325f59a36a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:33:06.046927722 +0000 UTC m=+0.743530627,LastTimestamp:2025-05-13 12:33:06.046927722 +0000 UTC m=+0.743530627,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:33:09.018499 kubelet[2406]: E0513 12:33:09.018475 2406 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:33:09.091860 kubelet[2406]: E0513 12:33:09.091765 2406 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f1632602a485c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:33:06.060601436 +0000 UTC m=+0.757204341,LastTimestamp:2025-05-13 12:33:06.060601436 +0000 UTC m=+0.757204341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:33:09.119396 kubelet[2406]: E0513 12:33:09.119363 2406 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:33:09.120888 kubelet[2406]: E0513 12:33:09.120865 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" May 13 12:33:09.219802 kubelet[2406]: E0513 12:33:09.219700 2406 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:33:09.320865 kubelet[2406]: E0513 12:33:09.320809 2406 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:33:09.421361 kubelet[2406]: E0513 12:33:09.421323 2406 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:33:10.042319 kubelet[2406]: I0513 12:33:10.042260 2406 apiserver.go:52] "Watching apiserver" May 13 12:33:10.049070 kubelet[2406]: I0513 12:33:10.049047 2406 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:33:10.764487 systemd[1]: Reload requested from client PID 2686 ('systemctl') (unit session-7.scope)... May 13 12:33:10.764502 systemd[1]: Reloading... May 13 12:33:10.839597 zram_generator::config[2729]: No configuration found. May 13 12:33:10.907982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:33:11.002379 systemd[1]: Reloading finished in 237 ms. May 13 12:33:11.027511 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:33:11.037510 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:33:11.037886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:33:11.038026 systemd[1]: kubelet.service: Consumed 1.091s CPU time, 111.7M memory peak. May 13 12:33:11.039688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 12:33:11.173348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:33:11.178604 (kubelet)[2771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:33:11.216759 kubelet[2771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:33:11.216759 kubelet[2771]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 12:33:11.216759 kubelet[2771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:33:11.217082 kubelet[2771]: I0513 12:33:11.216785 2771 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:33:11.222055 kubelet[2771]: I0513 12:33:11.220656 2771 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 12:33:11.222055 kubelet[2771]: I0513 12:33:11.220677 2771 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:33:11.222055 kubelet[2771]: I0513 12:33:11.220847 2771 server.go:927] "Client rotation is on, will bootstrap in background" May 13 12:33:11.222186 kubelet[2771]: I0513 12:33:11.222059 2771 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 12:33:11.223135 kubelet[2771]: I0513 12:33:11.223108 2771 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:33:11.228354 kubelet[2771]: I0513 12:33:11.228334 2771 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:33:11.228508 kubelet[2771]: I0513 12:33:11.228484 2771 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:33:11.228659 kubelet[2771]: I0513 12:33:11.228508 2771 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 12:33:11.228735 kubelet[2771]: I0513 12:33:11.228666 2771 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:33:11.228735 kubelet[2771]: I0513 12:33:11.228674 2771 container_manager_linux.go:301] "Creating device plugin manager" May 13 12:33:11.228735 kubelet[2771]: I0513 12:33:11.228704 2771 state_mem.go:36] "Initialized new in-memory state store" May 13 12:33:11.228807 kubelet[2771]: I0513 12:33:11.228793 2771 kubelet.go:400] "Attempting to sync node with API server" May 13 12:33:11.228828 kubelet[2771]: I0513 12:33:11.228807 2771 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:33:11.228846 kubelet[2771]: I0513 12:33:11.228830 2771 kubelet.go:312] "Adding apiserver pod source" May 13 12:33:11.228865 kubelet[2771]: I0513 12:33:11.228848 2771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:33:11.230574 kubelet[2771]: I0513 12:33:11.229618 2771 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:33:11.230723 kubelet[2771]: I0513 12:33:11.230699 2771 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:33:11.231089 kubelet[2771]: I0513 12:33:11.231067 2771 server.go:1264] "Started kubelet" May 13 12:33:11.231878 kubelet[2771]: I0513 12:33:11.231839 2771 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:33:11.232629 kubelet[2771]: I0513 12:33:11.232606 2771 server.go:455] "Adding debug handlers to kubelet server" May 13 12:33:11.232984 kubelet[2771]: I0513 12:33:11.232958 2771 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" May 13 12:33:11.233500 kubelet[2771]: I0513 12:33:11.233454 2771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:33:11.233760 kubelet[2771]: I0513 12:33:11.233741 2771 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:33:11.234095 kubelet[2771]: I0513 12:33:11.234077 2771 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 12:33:11.234239 kubelet[2771]: I0513 12:33:11.234226 2771 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:33:11.234406 kubelet[2771]: I0513 12:33:11.234393 2771 reconciler.go:26] "Reconciler: start to sync state" May 13 12:33:11.236825 kubelet[2771]: I0513 12:33:11.236802 2771 factory.go:221] Registration of the containerd container factory successfully May 13 12:33:11.236825 kubelet[2771]: I0513 12:33:11.236817 2771 factory.go:221] Registration of the systemd container factory successfully May 13 12:33:11.236919 kubelet[2771]: I0513 12:33:11.236869 2771 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:33:11.238564 kubelet[2771]: E0513 12:33:11.237655 2771 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:33:11.249449 kubelet[2771]: I0513 12:33:11.249403 2771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:33:11.250391 kubelet[2771]: I0513 12:33:11.250360 2771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:33:11.250391 kubelet[2771]: I0513 12:33:11.250395 2771 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 12:33:11.250480 kubelet[2771]: I0513 12:33:11.250412 2771 kubelet.go:2337] "Starting kubelet main sync loop" May 13 12:33:11.250480 kubelet[2771]: E0513 12:33:11.250458 2771 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:33:11.282488 kubelet[2771]: I0513 12:33:11.282393 2771 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 12:33:11.282488 kubelet[2771]: I0513 12:33:11.282413 2771 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 12:33:11.282488 kubelet[2771]: I0513 12:33:11.282432 2771 state_mem.go:36] "Initialized new in-memory state store" May 13 12:33:11.282641 kubelet[2771]: I0513 12:33:11.282580 2771 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 12:33:11.282641 kubelet[2771]: I0513 12:33:11.282590 2771 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 12:33:11.282641 kubelet[2771]: I0513 12:33:11.282607 2771 policy_none.go:49] "None policy: Start" May 13 12:33:11.283255 kubelet[2771]: I0513 12:33:11.283222 2771 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 12:33:11.283255 kubelet[2771]: I0513 12:33:11.283244 2771 state_mem.go:35] "Initializing new in-memory state store" May 13 12:33:11.283491 kubelet[2771]: I0513 12:33:11.283362 2771 state_mem.go:75] "Updated machine memory state" May 13 12:33:11.287924 kubelet[2771]: I0513 12:33:11.287898 2771 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:33:11.288252 
kubelet[2771]: I0513 12:33:11.288061 2771 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:33:11.288252 kubelet[2771]: I0513 12:33:11.288155 2771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:33:11.338925 kubelet[2771]: I0513 12:33:11.338900 2771 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:33:11.344765 kubelet[2771]: I0513 12:33:11.344740 2771 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 12:33:11.344844 kubelet[2771]: I0513 12:33:11.344813 2771 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 12:33:11.351260 kubelet[2771]: I0513 12:33:11.351102 2771 topology_manager.go:215] "Topology Admit Handler" podUID="23fb698fbd15401c47f14cb586c93b1f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 12:33:11.351260 kubelet[2771]: I0513 12:33:11.351214 2771 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 12:33:11.351260 kubelet[2771]: I0513 12:33:11.351250 2771 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 12:33:11.435955 kubelet[2771]: I0513 12:33:11.435925 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23fb698fbd15401c47f14cb586c93b1f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23fb698fbd15401c47f14cb586c93b1f\") " pod="kube-system/kube-apiserver-localhost" May 13 12:33:11.436199 kubelet[2771]: I0513 12:33:11.436118 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:11.436199 kubelet[2771]: I0513 12:33:11.436144 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:11.436199 kubelet[2771]: I0513 12:33:11.436173 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23fb698fbd15401c47f14cb586c93b1f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb698fbd15401c47f14cb586c93b1f\") " pod="kube-system/kube-apiserver-localhost" May 13 12:33:11.436419 kubelet[2771]: I0513 12:33:11.436189 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23fb698fbd15401c47f14cb586c93b1f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb698fbd15401c47f14cb586c93b1f\") " pod="kube-system/kube-apiserver-localhost" May 13 12:33:11.436419 kubelet[2771]: I0513 12:33:11.436366 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:11.436419 kubelet[2771]: I0513 12:33:11.436383 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 12:33:11.436419 kubelet[2771]: I0513 12:33:11.436404 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:11.436609 kubelet[2771]: I0513 12:33:11.436595 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:33:11.764091 sudo[2805]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 12:33:11.764345 sudo[2805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 12:33:12.211283 sudo[2805]: pam_unix(sudo:session): session closed for user root May 13 12:33:12.230298 kubelet[2771]: I0513 12:33:12.230114 2771 apiserver.go:52] "Watching apiserver" May 13 12:33:12.234768 kubelet[2771]: I0513 12:33:12.234730 2771 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:33:12.277221 kubelet[2771]: E0513 12:33:12.277187 2771 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 12:33:12.299457 kubelet[2771]: I0513 12:33:12.298872 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.294431845 podStartE2EDuration="1.294431845s" podCreationTimestamp="2025-05-13 12:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:33:12.287657308 +0000 UTC m=+1.105722658" watchObservedRunningTime="2025-05-13 12:33:12.294431845 +0000 UTC m=+1.112497236" May 13 12:33:12.306598 kubelet[2771]: I0513 12:33:12.306307 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.306292921 podStartE2EDuration="1.306292921s" podCreationTimestamp="2025-05-13 12:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:33:12.299023515 +0000 UTC m=+1.117088865" watchObservedRunningTime="2025-05-13 12:33:12.306292921 +0000 UTC m=+1.124358311" May 13 12:33:12.318567 kubelet[2771]: I0513 12:33:12.318258 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.318243601 
podStartE2EDuration="1.318243601s" podCreationTimestamp="2025-05-13 12:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:33:12.306469648 +0000 UTC m=+1.124535038" watchObservedRunningTime="2025-05-13 12:33:12.318243601 +0000 UTC m=+1.136308951" May 13 12:33:15.581343 sudo[1726]: pam_unix(sudo:session): session closed for user root May 13 12:33:15.582463 sshd[1725]: Connection closed by 10.0.0.1 port 60460 May 13 12:33:15.582890 sshd-session[1722]: pam_unix(sshd:session): session closed for user core May 13 12:33:15.586493 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:60460.service: Deactivated successfully. May 13 12:33:15.589464 systemd[1]: session-7.scope: Deactivated successfully. May 13 12:33:15.589671 systemd[1]: session-7.scope: Consumed 10.747s CPU time, 283.4M memory peak. May 13 12:33:15.590763 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. May 13 12:33:15.591828 systemd-logind[1504]: Removed session 7. May 13 12:33:23.458600 update_engine[1511]: I20250513 12:33:23.458446 1511 update_attempter.cc:509] Updating boot flags... May 13 12:33:25.968426 kubelet[2771]: I0513 12:33:25.968392 2771 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 12:33:25.972268 containerd[1528]: time="2025-05-13T12:33:25.972235568Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 12:33:25.972485 kubelet[2771]: I0513 12:33:25.972454 2771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 12:33:26.967992 kubelet[2771]: I0513 12:33:26.967905 2771 topology_manager.go:215] "Topology Admit Handler" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" podNamespace="kube-system" podName="cilium-ffsgm" May 13 12:33:26.968171 kubelet[2771]: I0513 12:33:26.968146 2771 topology_manager.go:215] "Topology Admit Handler" podUID="a1e38ef9-c034-47ad-8408-e6e7f6f13f83" podNamespace="kube-system" podName="kube-proxy-frhxb" May 13 12:33:26.980459 systemd[1]: Created slice kubepods-besteffort-poda1e38ef9_c034_47ad_8408_e6e7f6f13f83.slice - libcontainer container kubepods-besteffort-poda1e38ef9_c034_47ad_8408_e6e7f6f13f83.slice. May 13 12:33:26.995819 systemd[1]: Created slice kubepods-burstable-pod6caa8d14_0307_4fe6_9619_4c0fd969eda7.slice - libcontainer container kubepods-burstable-pod6caa8d14_0307_4fe6_9619_4c0fd969eda7.slice. 
May 13 12:33:27.058957 kubelet[2771]: I0513 12:33:27.058902 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-kernel\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059291 kubelet[2771]: I0513 12:33:27.058976 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1e38ef9-c034-47ad-8408-e6e7f6f13f83-lib-modules\") pod \"kube-proxy-frhxb\" (UID: \"a1e38ef9-c034-47ad-8408-e6e7f6f13f83\") " pod="kube-system/kube-proxy-frhxb" May 13 12:33:27.059291 kubelet[2771]: I0513 12:33:27.058996 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6caa8d14-0307-4fe6-9619-4c0fd969eda7-clustermesh-secrets\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059291 kubelet[2771]: I0513 12:33:27.059012 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hubble-tls\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059291 kubelet[2771]: I0513 12:33:27.059028 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1e38ef9-c034-47ad-8408-e6e7f6f13f83-xtables-lock\") pod \"kube-proxy-frhxb\" (UID: \"a1e38ef9-c034-47ad-8408-e6e7f6f13f83\") " pod="kube-system/kube-proxy-frhxb" May 13 12:33:27.059291 kubelet[2771]: I0513 12:33:27.059047 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-net\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059291 kubelet[2771]: I0513 12:33:27.059061 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-run\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059423 kubelet[2771]: I0513 12:33:27.059087 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hostproc\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059423 kubelet[2771]: I0513 12:33:27.059105 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-cgroup\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059423 kubelet[2771]: I0513 12:33:27.059120 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-lib-modules\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059423 kubelet[2771]: I0513 12:33:27.059136 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf4sf\" (UniqueName: \"kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-kube-api-access-cf4sf\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059423 kubelet[2771]: I0513 12:33:27.059153 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cni-path\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059423 kubelet[2771]: I0513 12:33:27.059175 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxls\" (UniqueName: \"kubernetes.io/projected/a1e38ef9-c034-47ad-8408-e6e7f6f13f83-kube-api-access-pkxls\") pod \"kube-proxy-frhxb\" (UID: \"a1e38ef9-c034-47ad-8408-e6e7f6f13f83\") " pod="kube-system/kube-proxy-frhxb" May 13 12:33:27.059540 kubelet[2771]: I0513 12:33:27.059194 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-etc-cni-netd\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059540 kubelet[2771]: I0513 12:33:27.059211 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-xtables-lock\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059540 kubelet[2771]: I0513 12:33:27.059229 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-bpf-maps\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059540 kubelet[2771]: I0513 12:33:27.059243 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-config-path\") pod \"cilium-ffsgm\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " pod="kube-system/cilium-ffsgm" May 13 12:33:27.059540 kubelet[2771]: I0513 12:33:27.059262 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1e38ef9-c034-47ad-8408-e6e7f6f13f83-kube-proxy\") pod \"kube-proxy-frhxb\" (UID: \"a1e38ef9-c034-47ad-8408-e6e7f6f13f83\") " pod="kube-system/kube-proxy-frhxb" May 13 12:33:27.107189 kubelet[2771]: I0513 12:33:27.106870 2771 topology_manager.go:215] "Topology Admit Handler" podUID="4acd787a-22e0-485f-84a0-09cebeaf02ef" podNamespace="kube-system" podName="cilium-operator-599987898-nwn6z" May 13 12:33:27.113642 systemd[1]: Created slice kubepods-besteffort-pod4acd787a_22e0_485f_84a0_09cebeaf02ef.slice - 
libcontainer container kubepods-besteffort-pod4acd787a_22e0_485f_84a0_09cebeaf02ef.slice. May 13 12:33:27.160440 kubelet[2771]: I0513 12:33:27.159780 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4acd787a-22e0-485f-84a0-09cebeaf02ef-cilium-config-path\") pod \"cilium-operator-599987898-nwn6z\" (UID: \"4acd787a-22e0-485f-84a0-09cebeaf02ef\") " pod="kube-system/cilium-operator-599987898-nwn6z" May 13 12:33:27.160440 kubelet[2771]: I0513 12:33:27.159824 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbdn5\" (UniqueName: \"kubernetes.io/projected/4acd787a-22e0-485f-84a0-09cebeaf02ef-kube-api-access-zbdn5\") pod \"cilium-operator-599987898-nwn6z\" (UID: \"4acd787a-22e0-485f-84a0-09cebeaf02ef\") " pod="kube-system/cilium-operator-599987898-nwn6z" May 13 12:33:27.296220 containerd[1528]: time="2025-05-13T12:33:27.295844128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-frhxb,Uid:a1e38ef9-c034-47ad-8408-e6e7f6f13f83,Namespace:kube-system,Attempt:0,}" May 13 12:33:27.299852 containerd[1528]: time="2025-05-13T12:33:27.299819274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffsgm,Uid:6caa8d14-0307-4fe6-9619-4c0fd969eda7,Namespace:kube-system,Attempt:0,}" May 13 12:33:27.322458 containerd[1528]: time="2025-05-13T12:33:27.322400764Z" level=info msg="connecting to shim 34deb9463a2343de4ba590c94a30f97fb400931c23130ab1b57f0fc7d9c7a451" address="unix:///run/containerd/s/10bd71be9f643392d9e67a4dabb006a9faac2f2537d95be2e7cdedf90232a9fb" namespace=k8s.io protocol=ttrpc version=3 May 13 12:33:27.325542 containerd[1528]: time="2025-05-13T12:33:27.325245265Z" level=info msg="connecting to shim d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc" address="unix:///run/containerd/s/466ab871fe047016d617f5f8f8e6bedc0112a83daa3348bb4a044a444519f860" namespace=k8s.io protocol=ttrpc version=3 May 13 12:33:27.345732 systemd[1]: Started cri-containerd-34deb9463a2343de4ba590c94a30f97fb400931c23130ab1b57f0fc7d9c7a451.scope - libcontainer container 34deb9463a2343de4ba590c94a30f97fb400931c23130ab1b57f0fc7d9c7a451. May 13 12:33:27.348623 systemd[1]: Started cri-containerd-d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc.scope - libcontainer container d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc. 
May 13 12:33:27.374737 containerd[1528]: time="2025-05-13T12:33:27.374680158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffsgm,Uid:6caa8d14-0307-4fe6-9619-4c0fd969eda7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\"" May 13 12:33:27.376561 containerd[1528]: time="2025-05-13T12:33:27.376526151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-frhxb,Uid:a1e38ef9-c034-47ad-8408-e6e7f6f13f83,Namespace:kube-system,Attempt:0,} returns sandbox id \"34deb9463a2343de4ba590c94a30f97fb400931c23130ab1b57f0fc7d9c7a451\"" May 13 12:33:27.381169 containerd[1528]: time="2025-05-13T12:33:27.381135248Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 12:33:27.387223 containerd[1528]: time="2025-05-13T12:33:27.387153111Z" level=info msg="CreateContainer within sandbox \"34deb9463a2343de4ba590c94a30f97fb400931c23130ab1b57f0fc7d9c7a451\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 12:33:27.395492 containerd[1528]: time="2025-05-13T12:33:27.395456474Z" level=info msg="Container f758d4941842f67ef89b233e9bb3d2d305048e3f6316a2ebcdeecae3077ea3c1: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:27.402342 containerd[1528]: time="2025-05-13T12:33:27.402303252Z" level=info msg="CreateContainer within sandbox \"34deb9463a2343de4ba590c94a30f97fb400931c23130ab1b57f0fc7d9c7a451\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f758d4941842f67ef89b233e9bb3d2d305048e3f6316a2ebcdeecae3077ea3c1\"" May 13 12:33:27.402851 containerd[1528]: time="2025-05-13T12:33:27.402823555Z" level=info msg="StartContainer for \"f758d4941842f67ef89b233e9bb3d2d305048e3f6316a2ebcdeecae3077ea3c1\"" May 13 12:33:27.404504 containerd[1528]: time="2025-05-13T12:33:27.404473663Z" level=info msg="connecting to shim f758d4941842f67ef89b233e9bb3d2d305048e3f6316a2ebcdeecae3077ea3c1" address="unix:///run/containerd/s/10bd71be9f643392d9e67a4dabb006a9faac2f2537d95be2e7cdedf90232a9fb" protocol=ttrpc version=3 May 13 12:33:27.419018 containerd[1528]: time="2025-05-13T12:33:27.418978168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-nwn6z,Uid:4acd787a-22e0-485f-84a0-09cebeaf02ef,Namespace:kube-system,Attempt:0,}" May 13 12:33:27.430739 systemd[1]: Started cri-containerd-f758d4941842f67ef89b233e9bb3d2d305048e3f6316a2ebcdeecae3077ea3c1.scope - libcontainer container f758d4941842f67ef89b233e9bb3d2d305048e3f6316a2ebcdeecae3077ea3c1. May 13 12:33:27.434762 containerd[1528]: time="2025-05-13T12:33:27.434707357Z" level=info msg="connecting to shim 593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d" address="unix:///run/containerd/s/a8de2c9ac274a812edaa41bdc4b878cb42f6b2c6b50596037d49dd2bd7065d79" namespace=k8s.io protocol=ttrpc version=3 May 13 12:33:27.458775 systemd[1]: Started cri-containerd-593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d.scope - libcontainer container 593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d. 
May 13 12:33:27.478009 containerd[1528]: time="2025-05-13T12:33:27.477862716Z" level=info msg="StartContainer for \"f758d4941842f67ef89b233e9bb3d2d305048e3f6316a2ebcdeecae3077ea3c1\" returns successfully" May 13 12:33:27.507509 containerd[1528]: time="2025-05-13T12:33:27.507472542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-nwn6z,Uid:4acd787a-22e0-485f-84a0-09cebeaf02ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\"" May 13 12:33:28.312870 kubelet[2771]: I0513 12:33:28.312801 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-frhxb" podStartSLOduration=2.31278122 podStartE2EDuration="2.31278122s" podCreationTimestamp="2025-05-13 12:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:33:28.312429316 +0000 UTC m=+17.130494706" watchObservedRunningTime="2025-05-13 12:33:28.31278122 +0000 UTC m=+17.130846610" May 13 12:33:32.172282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109144453.mount: Deactivated successfully. May 13 12:33:33.364877 containerd[1528]: time="2025-05-13T12:33:33.364769677Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:33:33.365690 containerd[1528]: time="2025-05-13T12:33:33.365316816Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 12:33:33.366245 containerd[1528]: time="2025-05-13T12:33:33.366208707Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:33:33.368433 containerd[1528]: time="2025-05-13T12:33:33.368303273Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.98713309s" May 13 12:33:33.368433 containerd[1528]: time="2025-05-13T12:33:33.368362492Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 12:33:33.377941 containerd[1528]: time="2025-05-13T12:33:33.377792657Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 12:33:33.378958 containerd[1528]: time="2025-05-13T12:33:33.378926628Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:33:33.400809 containerd[1528]: time="2025-05-13T12:33:33.398994674Z" level=info msg="Container d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:33.402178 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2800019149.mount: Deactivated successfully. May 13 12:33:33.403811 containerd[1528]: time="2025-05-13T12:33:33.403772917Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\"" May 13 12:33:33.404274 containerd[1528]: time="2025-05-13T12:33:33.404203978Z" level=info msg="StartContainer for \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\"" May 13 12:33:33.405193 containerd[1528]: time="2025-05-13T12:33:33.405163532Z" level=info msg="connecting to shim d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd" address="unix:///run/containerd/s/466ab871fe047016d617f5f8f8e6bedc0112a83daa3348bb4a044a444519f860" protocol=ttrpc version=3 May 13 12:33:33.446682 systemd[1]: Started cri-containerd-d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd.scope - libcontainer container d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd. May 13 12:33:33.476777 containerd[1528]: time="2025-05-13T12:33:33.475682084Z" level=info msg="StartContainer for \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" returns successfully" May 13 12:33:33.518480 systemd[1]: cri-containerd-d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd.scope: Deactivated successfully. May 13 12:33:33.549569 containerd[1528]: time="2025-05-13T12:33:33.549355189Z" level=info msg="received exit event container_id:\"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" id:\"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" pid:3207 exited_at:{seconds:1747139613 nanos:534461596}" May 13 12:33:33.556178 containerd[1528]: time="2025-05-13T12:33:33.556137447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" id:\"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" pid:3207 exited_at:{seconds:1747139613 nanos:534461596}" May 13 12:33:33.584726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd-rootfs.mount: Deactivated successfully. 
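
The image pull above finishes with a "Pulled image ... in 5.98713309s" message that already carries the elapsed time. A second illustrative sketch, under the same assumptions (one journal entry per line on stdin, message wording as shown), tabulates pull durations per image reference:

import re
import sys

# Matches containerd messages like:
#   msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:...\" with image id \"sha256:...\" ... in 5.98713309s"
PULLED = re.compile(r'msg="Pulled image \\"(?P<ref>[^\\]+)\\".* in (?P<dur>[^"]+)"')

for line in sys.stdin:
    m = PULLED.search(line)
    if m:
        print(f"{m.group('dur'):>15}  {m.group('ref')}")

Against this capture it yields 5.98713309s for the cilium image and, further down, 1.487406884s for operator-generic.
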
May 13 12:33:34.361520 containerd[1528]: time="2025-05-13T12:33:34.361476923Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:33:34.374886 containerd[1528]: time="2025-05-13T12:33:34.374844437Z" level=info msg="Container 851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:34.379330 containerd[1528]: time="2025-05-13T12:33:34.379291072Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\"" May 13 12:33:34.379734 containerd[1528]: time="2025-05-13T12:33:34.379709923Z" level=info msg="StartContainer for \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\"" May 13 12:33:34.380727 containerd[1528]: time="2025-05-13T12:33:34.380686269Z" level=info msg="connecting to shim 851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314" address="unix:///run/containerd/s/466ab871fe047016d617f5f8f8e6bedc0112a83daa3348bb4a044a444519f860" protocol=ttrpc version=3 May 13 12:33:34.418694 systemd[1]: Started cri-containerd-851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314.scope - libcontainer container 851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314. May 13 12:33:34.447261 containerd[1528]: time="2025-05-13T12:33:34.447215540Z" level=info msg="StartContainer for \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" returns successfully" May 13 12:33:34.454174 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 12:33:34.454382 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 12:33:34.454864 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 12:33:34.456295 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:33:34.458872 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 12:33:34.459217 systemd[1]: cri-containerd-851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314.scope: Deactivated successfully. May 13 12:33:34.459686 containerd[1528]: time="2025-05-13T12:33:34.459517319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" id:\"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" pid:3252 exited_at:{seconds:1747139614 nanos:457970273}" May 13 12:33:34.466823 containerd[1528]: time="2025-05-13T12:33:34.466787760Z" level=info msg="received exit event container_id:\"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" id:\"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" pid:3252 exited_at:{seconds:1747139614 nanos:457970273}" May 13 12:33:34.487143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:33:34.493081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314-rootfs.mount: Deactivated successfully. 
May 13 12:33:34.861648 containerd[1528]: time="2025-05-13T12:33:34.861226856Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:33:34.861961 containerd[1528]: time="2025-05-13T12:33:34.861929476Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 13 12:33:34.862967 containerd[1528]: time="2025-05-13T12:33:34.862918986Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:33:34.865570 containerd[1528]: time="2025-05-13T12:33:34.865227031Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.487406884s" May 13 12:33:34.865570 containerd[1528]: time="2025-05-13T12:33:34.865267403Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 12:33:34.868323 containerd[1528]: time="2025-05-13T12:33:34.868291512Z" level=info msg="CreateContainer within sandbox \"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 12:33:34.886033 containerd[1528]: time="2025-05-13T12:33:34.885999987Z" level=info msg="Container 8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:34.891447 containerd[1528]: time="2025-05-13T12:33:34.891389798Z" level=info msg="CreateContainer within sandbox \"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\"" May 13 12:33:34.891929 containerd[1528]: time="2025-05-13T12:33:34.891890155Z" level=info msg="StartContainer for \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\"" May 13 12:33:34.892818 containerd[1528]: time="2025-05-13T12:33:34.892790037Z" level=info msg="connecting to shim 8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c" address="unix:///run/containerd/s/a8de2c9ac274a812edaa41bdc4b878cb42f6b2c6b50596037d49dd2bd7065d79" protocol=ttrpc version=3 May 13 12:33:34.912698 systemd[1]: Started cri-containerd-8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c.scope - libcontainer container 8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c. 
May 13 12:33:34.937197 containerd[1528]: time="2025-05-13T12:33:34.937155515Z" level=info msg="StartContainer for \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" returns successfully" May 13 12:33:35.370744 containerd[1528]: time="2025-05-13T12:33:35.370698001Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 12:33:35.396687 containerd[1528]: time="2025-05-13T12:33:35.396645934Z" level=info msg="Container a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:35.400599 kubelet[2771]: I0513 12:33:35.400144 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-nwn6z" podStartSLOduration=1.04311443 podStartE2EDuration="8.400124701s" podCreationTimestamp="2025-05-13 12:33:27 +0000 UTC" firstStartedPulling="2025-05-13 12:33:27.509063985 +0000 UTC m=+16.327129375" lastFinishedPulling="2025-05-13 12:33:34.866074296 +0000 UTC m=+23.684139646" observedRunningTime="2025-05-13 12:33:35.381775537 +0000 UTC m=+24.199840887" watchObservedRunningTime="2025-05-13 12:33:35.400124701 +0000 UTC m=+24.218190091" May 13 12:33:35.405821 containerd[1528]: time="2025-05-13T12:33:35.405776122Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\"" May 13 12:33:35.406463 containerd[1528]: time="2025-05-13T12:33:35.406442803Z" level=info msg="StartContainer for \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\"" May 13 12:33:35.407847 containerd[1528]: time="2025-05-13T12:33:35.407794410Z" level=info msg="connecting to shim a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad" address="unix:///run/containerd/s/466ab871fe047016d617f5f8f8e6bedc0112a83daa3348bb4a044a444519f860" protocol=ttrpc version=3 May 13 12:33:35.429786 systemd[1]: Started cri-containerd-a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad.scope - libcontainer container a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad. May 13 12:33:35.467380 containerd[1528]: time="2025-05-13T12:33:35.467337377Z" level=info msg="StartContainer for \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" returns successfully" May 13 12:33:35.489721 systemd[1]: cri-containerd-a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad.scope: Deactivated successfully. 
May 13 12:33:35.501032 containerd[1528]: time="2025-05-13T12:33:35.499960118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" id:\"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" pid:3349 exited_at:{seconds:1747139615 nanos:499579724}" May 13 12:33:35.501826 containerd[1528]: time="2025-05-13T12:33:35.501790789Z" level=info msg="received exit event container_id:\"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" id:\"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" pid:3349 exited_at:{seconds:1747139615 nanos:499579724}" May 13 12:33:35.521510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad-rootfs.mount: Deactivated successfully. May 13 12:33:36.381831 containerd[1528]: time="2025-05-13T12:33:36.381788782Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 12:33:36.389011 containerd[1528]: time="2025-05-13T12:33:36.388966458Z" level=info msg="Container c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:36.394811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1493536116.mount: Deactivated successfully. May 13 12:33:36.397311 containerd[1528]: time="2025-05-13T12:33:36.397257336Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\"" May 13 12:33:36.397966 containerd[1528]: time="2025-05-13T12:33:36.397944775Z" level=info msg="StartContainer for \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\"" May 13 12:33:36.398798 containerd[1528]: time="2025-05-13T12:33:36.398774255Z" level=info msg="connecting to shim c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2" address="unix:///run/containerd/s/466ab871fe047016d617f5f8f8e6bedc0112a83daa3348bb4a044a444519f860" protocol=ttrpc version=3 May 13 12:33:36.415686 systemd[1]: Started cri-containerd-c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2.scope - libcontainer container c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2. May 13 12:33:36.438833 systemd[1]: cri-containerd-c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2.scope: Deactivated successfully. 
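
Each Cilium init container so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) ends the same way: its scope is deactivated and containerd emits a "received exit event" carrying the container ID, PID, and an exited_at epoch timestamp. The sketch below, again illustrative and assuming the field layout shown in these entries, converts those exit events into wall-clock times:

import re
import sys
from datetime import datetime, timezone

# Matches containerd messages like:
#   msg="received exit event container_id:\"<hex>\" id:\"<hex>\" pid:3207 exited_at:{seconds:1747139613 nanos:534461596}"
EXIT = re.compile(
    r'received exit event container_id:\\"(?P<cid>[0-9a-f]+)\\".*?'
    r'pid:(?P<pid>\d+) exited_at:\{seconds:(?P<sec>\d+) nanos:(?P<ns>\d+)\}'
)

for line in sys.stdin:
    m = EXIT.search(line)
    if m:
        exited = datetime.fromtimestamp(int(m.group('sec')) + int(m.group('ns')) / 1e9, tz=timezone.utc)
        print(f"{m.group('cid')[:12]}  pid={m.group('pid')}  exited {exited.isoformat()}")

For the mount-cgroup container (d39af0077...) this gives roughly 2025-05-13T12:33:33.53Z, consistent with the surrounding journal timestamps.
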
May 13 12:33:36.440443 containerd[1528]: time="2025-05-13T12:33:36.440407696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" id:\"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" pid:3392 exited_at:{seconds:1747139616 nanos:439965248}" May 13 12:33:36.442330 containerd[1528]: time="2025-05-13T12:33:36.442290481Z" level=info msg="received exit event container_id:\"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" id:\"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" pid:3392 exited_at:{seconds:1747139616 nanos:439965248}" May 13 12:33:36.443351 containerd[1528]: time="2025-05-13T12:33:36.443324700Z" level=info msg="StartContainer for \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" returns successfully" May 13 12:33:36.446994 containerd[1528]: time="2025-05-13T12:33:36.439557010Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6caa8d14_0307_4fe6_9619_4c0fd969eda7.slice/cri-containerd-c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2.scope/memory.events\": no such file or directory" May 13 12:33:36.462004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2-rootfs.mount: Deactivated successfully. May 13 12:33:37.386454 containerd[1528]: time="2025-05-13T12:33:37.386415312Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 12:33:37.411859 containerd[1528]: time="2025-05-13T12:33:37.411814135Z" level=info msg="Container 77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:37.418534 containerd[1528]: time="2025-05-13T12:33:37.418499195Z" level=info msg="CreateContainer within sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\"" May 13 12:33:37.419075 containerd[1528]: time="2025-05-13T12:33:37.419050628Z" level=info msg="StartContainer for \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\"" May 13 12:33:37.420194 containerd[1528]: time="2025-05-13T12:33:37.420120526Z" level=info msg="connecting to shim 77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c" address="unix:///run/containerd/s/466ab871fe047016d617f5f8f8e6bedc0112a83daa3348bb4a044a444519f860" protocol=ttrpc version=3 May 13 12:33:37.443693 systemd[1]: Started cri-containerd-77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c.scope - libcontainer container 77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c. 
May 13 12:33:37.477059 containerd[1528]: time="2025-05-13T12:33:37.477025432Z" level=info msg="StartContainer for \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" returns successfully" May 13 12:33:37.557842 containerd[1528]: time="2025-05-13T12:33:37.557747523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" id:\"4972e688f16d42c921fe9c9ed3425477e10b7ab7112bfcd88c241a5c88f9bd68\" pid:3459 exited_at:{seconds:1747139617 nanos:557453121}" May 13 12:33:37.573950 kubelet[2771]: I0513 12:33:37.573909 2771 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 12:33:37.597255 kubelet[2771]: I0513 12:33:37.597028 2771 topology_manager.go:215] "Topology Admit Handler" podUID="df69edd3-0833-4c3d-a8e1-2c0e93fbe7c7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h564h" May 13 12:33:37.597722 kubelet[2771]: I0513 12:33:37.597680 2771 topology_manager.go:215] "Topology Admit Handler" podUID="c8365e53-587e-4775-8ce8-82176a99163c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k7tg8" May 13 12:33:37.627691 systemd[1]: Created slice kubepods-burstable-poddf69edd3_0833_4c3d_a8e1_2c0e93fbe7c7.slice - libcontainer container kubepods-burstable-poddf69edd3_0833_4c3d_a8e1_2c0e93fbe7c7.slice. May 13 12:33:37.631057 kubelet[2771]: I0513 12:33:37.631020 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5vc\" (UniqueName: \"kubernetes.io/projected/df69edd3-0833-4c3d-a8e1-2c0e93fbe7c7-kube-api-access-kd5vc\") pod \"coredns-7db6d8ff4d-h564h\" (UID: \"df69edd3-0833-4c3d-a8e1-2c0e93fbe7c7\") " pod="kube-system/coredns-7db6d8ff4d-h564h" May 13 12:33:37.631431 kubelet[2771]: I0513 12:33:37.631334 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df69edd3-0833-4c3d-a8e1-2c0e93fbe7c7-config-volume\") pod \"coredns-7db6d8ff4d-h564h\" (UID: \"df69edd3-0833-4c3d-a8e1-2c0e93fbe7c7\") " pod="kube-system/coredns-7db6d8ff4d-h564h" May 13 12:33:37.631431 kubelet[2771]: I0513 12:33:37.631372 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8365e53-587e-4775-8ce8-82176a99163c-config-volume\") pod \"coredns-7db6d8ff4d-k7tg8\" (UID: \"c8365e53-587e-4775-8ce8-82176a99163c\") " pod="kube-system/coredns-7db6d8ff4d-k7tg8" May 13 12:33:37.631431 kubelet[2771]: I0513 12:33:37.631391 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56hwc\" (UniqueName: \"kubernetes.io/projected/c8365e53-587e-4775-8ce8-82176a99163c-kube-api-access-56hwc\") pod \"coredns-7db6d8ff4d-k7tg8\" (UID: \"c8365e53-587e-4775-8ce8-82176a99163c\") " pod="kube-system/coredns-7db6d8ff4d-k7tg8" May 13 12:33:37.631688 systemd[1]: Created slice kubepods-burstable-podc8365e53_587e_4775_8ce8_82176a99163c.slice - libcontainer container kubepods-burstable-podc8365e53_587e_4775_8ce8_82176a99163c.slice. 
May 13 12:33:37.931246 containerd[1528]: time="2025-05-13T12:33:37.931073993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h564h,Uid:df69edd3-0833-4c3d-a8e1-2c0e93fbe7c7,Namespace:kube-system,Attempt:0,}" May 13 12:33:37.935304 containerd[1528]: time="2025-05-13T12:33:37.934666952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k7tg8,Uid:c8365e53-587e-4775-8ce8-82176a99163c,Namespace:kube-system,Attempt:0,}" May 13 12:33:38.409242 kubelet[2771]: I0513 12:33:38.409171 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ffsgm" podStartSLOduration=6.41253224 podStartE2EDuration="12.409152196s" podCreationTimestamp="2025-05-13 12:33:26 +0000 UTC" firstStartedPulling="2025-05-13 12:33:27.380744281 +0000 UTC m=+16.198809671" lastFinishedPulling="2025-05-13 12:33:33.377364237 +0000 UTC m=+22.195429627" observedRunningTime="2025-05-13 12:33:38.408706316 +0000 UTC m=+27.226771706" watchObservedRunningTime="2025-05-13 12:33:38.409152196 +0000 UTC m=+27.227217586" May 13 12:33:39.655046 systemd-networkd[1430]: cilium_host: Link UP May 13 12:33:39.655738 systemd-networkd[1430]: cilium_net: Link UP May 13 12:33:39.655900 systemd-networkd[1430]: cilium_net: Gained carrier May 13 12:33:39.656029 systemd-networkd[1430]: cilium_host: Gained carrier May 13 12:33:39.740922 systemd-networkd[1430]: cilium_vxlan: Link UP May 13 12:33:39.740927 systemd-networkd[1430]: cilium_vxlan: Gained carrier May 13 12:33:40.037596 kernel: NET: Registered PF_ALG protocol family May 13 12:33:40.201718 systemd-networkd[1430]: cilium_host: Gained IPv6LL May 13 12:33:40.265763 systemd-networkd[1430]: cilium_net: Gained IPv6LL May 13 12:33:40.606057 systemd-networkd[1430]: lxc_health: Link UP May 13 12:33:40.606394 systemd-networkd[1430]: lxc_health: Gained carrier May 13 12:33:41.063727 systemd-networkd[1430]: lxc77fa6604df2e: Link UP May 13 12:33:41.065018 kernel: eth0: renamed from tmp884c2 May 13 12:33:41.073661 systemd-networkd[1430]: lxc3b8eff34fd60: Link UP May 13 12:33:41.075068 systemd-networkd[1430]: lxc77fa6604df2e: Gained carrier May 13 12:33:41.076392 kernel: eth0: renamed from tmpaeac9 May 13 12:33:41.077988 systemd-networkd[1430]: lxc3b8eff34fd60: Gained carrier May 13 12:33:41.226677 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL May 13 12:33:41.674660 systemd-networkd[1430]: lxc_health: Gained IPv6LL May 13 12:33:41.836791 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:59636.service - OpenSSH per-connection server daemon (10.0.0.1:59636). May 13 12:33:41.883206 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 59636 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:33:41.884474 sshd-session[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:33:41.889397 systemd-logind[1504]: New session 8 of user core. May 13 12:33:41.901766 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 12:33:42.059095 sshd[3945]: Connection closed by 10.0.0.1 port 59636 May 13 12:33:42.059489 sshd-session[3943]: pam_unix(sshd:session): session closed for user core May 13 12:33:42.064327 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. May 13 12:33:42.064871 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:59636.service: Deactivated successfully. May 13 12:33:42.067097 systemd[1]: session-8.scope: Deactivated successfully. May 13 12:33:42.071063 systemd-logind[1504]: Removed session 8. 
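
The systemd-networkd entries above show Cilium's datapath coming up: cilium_host, cilium_net and cilium_vxlan gain carrier, lxc_health and the per-pod lxc* interfaces follow, and each eventually gains IPv6LL. As a further illustrative aid under the same input assumptions, this sketch folds those events into a last-known state per link:

import re
import sys

# Matches systemd-networkd entries like "systemd-networkd[1430]: cilium_host: Gained carrier".
LINK = re.compile(
    r'systemd-networkd\[\d+\]: (?P<link>[\w.-]+): '
    r'(?P<event>Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)'
)

state = {}
for line in sys.stdin:
    m = LINK.search(line)
    if m:
        state[m.group('link')] = m.group('event')  # keep only the most recent event per link

for link, event in sorted(state.items()):
    print(f"{link:20} {event}")

In this stretch of the capture every cilium_* and lxc* link ends up at Gained IPv6LL; lxc_health only drops to Lost carrier during the teardown much further below.
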
May 13 12:33:42.121755 systemd-networkd[1430]: lxc77fa6604df2e: Gained IPv6LL May 13 12:33:42.185677 systemd-networkd[1430]: lxc3b8eff34fd60: Gained IPv6LL May 13 12:33:44.597671 containerd[1528]: time="2025-05-13T12:33:44.597631209Z" level=info msg="connecting to shim aeac9633802d1cd6953dc5ffc7b1351a4b530271f950f7983b2f2276ab934c72" address="unix:///run/containerd/s/192afaee750286cff5c59505a35551703e6c311d5a37065c3cba7d2a5f1ba672" namespace=k8s.io protocol=ttrpc version=3 May 13 12:33:44.598597 containerd[1528]: time="2025-05-13T12:33:44.598572054Z" level=info msg="connecting to shim 884c2d64004ef20b423e34a07013784390967855b4c40296c17b8671ac939a04" address="unix:///run/containerd/s/16f5f398de24067bd3da7e5e0f9d4f9f59fa3fe6cde7e6776e1ae3da830972c3" namespace=k8s.io protocol=ttrpc version=3 May 13 12:33:44.622709 systemd[1]: Started cri-containerd-aeac9633802d1cd6953dc5ffc7b1351a4b530271f950f7983b2f2276ab934c72.scope - libcontainer container aeac9633802d1cd6953dc5ffc7b1351a4b530271f950f7983b2f2276ab934c72. May 13 12:33:44.626316 systemd[1]: Started cri-containerd-884c2d64004ef20b423e34a07013784390967855b4c40296c17b8671ac939a04.scope - libcontainer container 884c2d64004ef20b423e34a07013784390967855b4c40296c17b8671ac939a04. May 13 12:33:44.638035 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:33:44.640972 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:33:44.666228 containerd[1528]: time="2025-05-13T12:33:44.666189205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h564h,Uid:df69edd3-0833-4c3d-a8e1-2c0e93fbe7c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeac9633802d1cd6953dc5ffc7b1351a4b530271f950f7983b2f2276ab934c72\"" May 13 12:33:44.667732 containerd[1528]: time="2025-05-13T12:33:44.667704175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k7tg8,Uid:c8365e53-587e-4775-8ce8-82176a99163c,Namespace:kube-system,Attempt:0,} returns sandbox id \"884c2d64004ef20b423e34a07013784390967855b4c40296c17b8671ac939a04\"" May 13 12:33:44.672748 containerd[1528]: time="2025-05-13T12:33:44.672715665Z" level=info msg="CreateContainer within sandbox \"aeac9633802d1cd6953dc5ffc7b1351a4b530271f950f7983b2f2276ab934c72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:33:44.673315 containerd[1528]: time="2025-05-13T12:33:44.673030574Z" level=info msg="CreateContainer within sandbox \"884c2d64004ef20b423e34a07013784390967855b4c40296c17b8671ac939a04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:33:44.681858 containerd[1528]: time="2025-05-13T12:33:44.681825407Z" level=info msg="Container 5d6e0e32c5dcc35a6e106701f0ee867307b832e8df188db7cbd54ab48facc680: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:44.687259 containerd[1528]: time="2025-05-13T12:33:44.687234184Z" level=info msg="Container d225732b42cea29c135a461e59146a5f9998a15c1a97ae49894dd262eec2419a: CDI devices from CRI Config.CDIDevices: []" May 13 12:33:44.690669 containerd[1528]: time="2025-05-13T12:33:44.690640565Z" level=info msg="CreateContainer within sandbox \"aeac9633802d1cd6953dc5ffc7b1351a4b530271f950f7983b2f2276ab934c72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d6e0e32c5dcc35a6e106701f0ee867307b832e8df188db7cbd54ab48facc680\"" May 13 12:33:44.691335 containerd[1528]: time="2025-05-13T12:33:44.691294788Z" level=info 
msg="StartContainer for \"5d6e0e32c5dcc35a6e106701f0ee867307b832e8df188db7cbd54ab48facc680\"" May 13 12:33:44.692064 containerd[1528]: time="2025-05-13T12:33:44.692039510Z" level=info msg="connecting to shim 5d6e0e32c5dcc35a6e106701f0ee867307b832e8df188db7cbd54ab48facc680" address="unix:///run/containerd/s/192afaee750286cff5c59505a35551703e6c311d5a37065c3cba7d2a5f1ba672" protocol=ttrpc version=3 May 13 12:33:44.694190 containerd[1528]: time="2025-05-13T12:33:44.693810335Z" level=info msg="CreateContainer within sandbox \"884c2d64004ef20b423e34a07013784390967855b4c40296c17b8671ac939a04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d225732b42cea29c135a461e59146a5f9998a15c1a97ae49894dd262eec2419a\"" May 13 12:33:44.694798 containerd[1528]: time="2025-05-13T12:33:44.694767263Z" level=info msg="StartContainer for \"d225732b42cea29c135a461e59146a5f9998a15c1a97ae49894dd262eec2419a\"" May 13 12:33:44.695521 containerd[1528]: time="2025-05-13T12:33:44.695492901Z" level=info msg="connecting to shim d225732b42cea29c135a461e59146a5f9998a15c1a97ae49894dd262eec2419a" address="unix:///run/containerd/s/16f5f398de24067bd3da7e5e0f9d4f9f59fa3fe6cde7e6776e1ae3da830972c3" protocol=ttrpc version=3 May 13 12:33:44.710737 systemd[1]: Started cri-containerd-5d6e0e32c5dcc35a6e106701f0ee867307b832e8df188db7cbd54ab48facc680.scope - libcontainer container 5d6e0e32c5dcc35a6e106701f0ee867307b832e8df188db7cbd54ab48facc680. May 13 12:33:44.713839 systemd[1]: Started cri-containerd-d225732b42cea29c135a461e59146a5f9998a15c1a97ae49894dd262eec2419a.scope - libcontainer container d225732b42cea29c135a461e59146a5f9998a15c1a97ae49894dd262eec2419a. May 13 12:33:44.748843 containerd[1528]: time="2025-05-13T12:33:44.748808741Z" level=info msg="StartContainer for \"d225732b42cea29c135a461e59146a5f9998a15c1a97ae49894dd262eec2419a\" returns successfully" May 13 12:33:44.759456 containerd[1528]: time="2025-05-13T12:33:44.759421010Z" level=info msg="StartContainer for \"5d6e0e32c5dcc35a6e106701f0ee867307b832e8df188db7cbd54ab48facc680\" returns successfully" May 13 12:33:45.423344 kubelet[2771]: I0513 12:33:45.423228 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k7tg8" podStartSLOduration=18.42321408 podStartE2EDuration="18.42321408s" podCreationTimestamp="2025-05-13 12:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:33:45.422427114 +0000 UTC m=+34.240492504" watchObservedRunningTime="2025-05-13 12:33:45.42321408 +0000 UTC m=+34.241279470" May 13 12:33:45.433399 kubelet[2771]: I0513 12:33:45.433352 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h564h" podStartSLOduration=18.433334775 podStartE2EDuration="18.433334775s" podCreationTimestamp="2025-05-13 12:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:33:45.432761854 +0000 UTC m=+34.250827244" watchObservedRunningTime="2025-05-13 12:33:45.433334775 +0000 UTC m=+34.251400125" May 13 12:33:45.582020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539171977.mount: Deactivated successfully. May 13 12:33:47.071955 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:50310.service - OpenSSH per-connection server daemon (10.0.0.1:50310). 
May 13 12:33:47.132579 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 50310 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:33:47.134085 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:33:47.137998 systemd-logind[1504]: New session 9 of user core. May 13 12:33:47.147744 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 12:33:47.265003 sshd[4139]: Connection closed by 10.0.0.1 port 50310 May 13 12:33:47.264111 sshd-session[4137]: pam_unix(sshd:session): session closed for user core May 13 12:33:47.267216 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:50310.service: Deactivated successfully. May 13 12:33:47.268911 systemd[1]: session-9.scope: Deactivated successfully. May 13 12:33:47.269620 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit. May 13 12:33:47.270655 systemd-logind[1504]: Removed session 9. May 13 12:33:52.278001 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:50322.service - OpenSSH per-connection server daemon (10.0.0.1:50322). May 13 12:33:52.330449 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 50322 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:33:52.331906 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:33:52.335374 systemd-logind[1504]: New session 10 of user core. May 13 12:33:52.341704 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 12:33:52.454796 sshd[4156]: Connection closed by 10.0.0.1 port 50322 May 13 12:33:52.455219 sshd-session[4154]: pam_unix(sshd:session): session closed for user core May 13 12:33:52.468728 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:50322.service: Deactivated successfully. May 13 12:33:52.470356 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:33:52.472013 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit. May 13 12:33:52.474342 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:35516.service - OpenSSH per-connection server daemon (10.0.0.1:35516). May 13 12:33:52.476127 systemd-logind[1504]: Removed session 10. May 13 12:33:52.534657 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 35516 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:33:52.535761 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:33:52.539499 systemd-logind[1504]: New session 11 of user core. May 13 12:33:52.550678 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 12:33:52.693371 sshd[4173]: Connection closed by 10.0.0.1 port 35516 May 13 12:33:52.694324 sshd-session[4171]: pam_unix(sshd:session): session closed for user core May 13 12:33:52.702104 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:35516.service: Deactivated successfully. May 13 12:33:52.705413 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:33:52.706851 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit. May 13 12:33:52.708763 systemd-logind[1504]: Removed session 11. May 13 12:33:52.711903 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:35518.service - OpenSSH per-connection server daemon (10.0.0.1:35518). 
May 13 12:33:52.776113 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 35518 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:33:52.777216 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:33:52.780788 systemd-logind[1504]: New session 12 of user core. May 13 12:33:52.789714 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 12:33:52.899014 sshd[4187]: Connection closed by 10.0.0.1 port 35518 May 13 12:33:52.899328 sshd-session[4185]: pam_unix(sshd:session): session closed for user core May 13 12:33:52.902754 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit. May 13 12:33:52.902889 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:35518.service: Deactivated successfully. May 13 12:33:52.904539 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:33:52.906175 systemd-logind[1504]: Removed session 12. May 13 12:33:57.916000 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:35524.service - OpenSSH per-connection server daemon (10.0.0.1:35524). May 13 12:33:57.962807 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 35524 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:33:57.964135 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:33:57.968300 systemd-logind[1504]: New session 13 of user core. May 13 12:33:57.980863 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 12:33:58.094306 sshd[4205]: Connection closed by 10.0.0.1 port 35524 May 13 12:33:58.094699 sshd-session[4203]: pam_unix(sshd:session): session closed for user core May 13 12:33:58.101624 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:35524.service: Deactivated successfully. May 13 12:33:58.103385 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:33:58.104490 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit. May 13 12:33:58.107009 systemd-logind[1504]: Removed session 13. May 13 12:34:03.108254 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:41726.service - OpenSSH per-connection server daemon (10.0.0.1:41726). May 13 12:34:03.159984 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 41726 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:03.161117 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:03.164705 systemd-logind[1504]: New session 14 of user core. May 13 12:34:03.170731 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 12:34:03.278056 sshd[4222]: Connection closed by 10.0.0.1 port 41726 May 13 12:34:03.278603 sshd-session[4220]: pam_unix(sshd:session): session closed for user core May 13 12:34:03.282353 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:41726.service: Deactivated successfully. May 13 12:34:03.284240 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:34:03.286121 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit. May 13 12:34:03.287164 systemd-logind[1504]: Removed session 14. May 13 12:34:08.301958 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:41732.service - OpenSSH per-connection server daemon (10.0.0.1:41732). 
May 13 12:34:08.359147 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 41732 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:08.363444 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:08.368457 systemd-logind[1504]: New session 15 of user core. May 13 12:34:08.378741 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 12:34:08.505661 sshd[4239]: Connection closed by 10.0.0.1 port 41732 May 13 12:34:08.504589 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 13 12:34:08.514247 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:41732.service: Deactivated successfully. May 13 12:34:08.515846 systemd[1]: session-15.scope: Deactivated successfully. May 13 12:34:08.518047 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit. May 13 12:34:08.519911 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:41742.service - OpenSSH per-connection server daemon (10.0.0.1:41742). May 13 12:34:08.521919 systemd-logind[1504]: Removed session 15. May 13 12:34:08.575828 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 41742 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:08.577314 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:08.581815 systemd-logind[1504]: New session 16 of user core. May 13 12:34:08.602731 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 12:34:08.822570 sshd[4254]: Connection closed by 10.0.0.1 port 41742 May 13 12:34:08.823176 sshd-session[4252]: pam_unix(sshd:session): session closed for user core May 13 12:34:08.837385 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:41742.service: Deactivated successfully. May 13 12:34:08.842624 systemd[1]: session-16.scope: Deactivated successfully. May 13 12:34:08.846701 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit. May 13 12:34:08.850733 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:41746.service - OpenSSH per-connection server daemon (10.0.0.1:41746). May 13 12:34:08.852957 systemd-logind[1504]: Removed session 16. May 13 12:34:08.910902 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 41746 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:08.911886 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:08.915621 systemd-logind[1504]: New session 17 of user core. May 13 12:34:08.921705 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 12:34:10.200464 sshd[4267]: Connection closed by 10.0.0.1 port 41746 May 13 12:34:10.200438 sshd-session[4265]: pam_unix(sshd:session): session closed for user core May 13 12:34:10.209393 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:41746.service: Deactivated successfully. May 13 12:34:10.211876 systemd[1]: session-17.scope: Deactivated successfully. May 13 12:34:10.213250 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit. May 13 12:34:10.216065 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:41750.service - OpenSSH per-connection server daemon (10.0.0.1:41750). May 13 12:34:10.218402 systemd-logind[1504]: Removed session 17. 
May 13 12:34:10.275889 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 41750 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:10.277146 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:10.280791 systemd-logind[1504]: New session 18 of user core. May 13 12:34:10.299702 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 12:34:10.523733 sshd[4289]: Connection closed by 10.0.0.1 port 41750 May 13 12:34:10.525452 sshd-session[4287]: pam_unix(sshd:session): session closed for user core May 13 12:34:10.533699 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:41750.service: Deactivated successfully. May 13 12:34:10.535234 systemd[1]: session-18.scope: Deactivated successfully. May 13 12:34:10.537619 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit. May 13 12:34:10.542132 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:41754.service - OpenSSH per-connection server daemon (10.0.0.1:41754). May 13 12:34:10.543858 systemd-logind[1504]: Removed session 18. May 13 12:34:10.596977 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 41754 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:10.598110 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:10.602401 systemd-logind[1504]: New session 19 of user core. May 13 12:34:10.606712 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 12:34:10.712076 sshd[4302]: Connection closed by 10.0.0.1 port 41754 May 13 12:34:10.712618 sshd-session[4300]: pam_unix(sshd:session): session closed for user core May 13 12:34:10.716013 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:41754.service: Deactivated successfully. May 13 12:34:10.717655 systemd[1]: session-19.scope: Deactivated successfully. May 13 12:34:10.718344 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit. May 13 12:34:10.719505 systemd-logind[1504]: Removed session 19. May 13 12:34:15.727817 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932). May 13 12:34:15.777711 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:15.779093 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:15.782935 systemd-logind[1504]: New session 20 of user core. May 13 12:34:15.793686 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 12:34:15.898692 sshd[4324]: Connection closed by 10.0.0.1 port 35932 May 13 12:34:15.899150 sshd-session[4322]: pam_unix(sshd:session): session closed for user core May 13 12:34:15.902629 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:35932.service: Deactivated successfully. May 13 12:34:15.904243 systemd[1]: session-20.scope: Deactivated successfully. May 13 12:34:15.904936 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit. May 13 12:34:15.906139 systemd-logind[1504]: Removed session 20. May 13 12:34:20.915360 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:35942.service - OpenSSH per-connection server daemon (10.0.0.1:35942). 
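
The sshd and systemd-logind entries repeat one lifecycle per connection: Accepted publickey, pam session opened, session-N.scope started, Connection closed, scope deactivated, session removed. The sketch below pairs accept and close events by source port and reports per-connection durations; it is illustrative only, and since these timestamps carry no year, times are parsed solely to compute deltas within one capture:

import re
import sys
from datetime import datetime

TS = r'(?P<ts>\w+ +\d+ \d+:\d+:\d+\.\d+)'
ACCEPT = re.compile(TS + r' sshd\[\d+\]: Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+)')
CLOSED = re.compile(TS + r' sshd\[\d+\]: Connection closed by (?P<ip>\S+) port (?P<port>\d+)')

def parse_ts(text):
    # These journal timestamps carry no year; strptime defaults to 1900, which is
    # harmless as long as only differences within one capture are computed.
    return datetime.strptime(text, "%b %d %H:%M:%S.%f")

open_conns = {}
for line in sys.stdin:
    m = ACCEPT.search(line)
    if m:
        open_conns[(m.group('ip'), m.group('port'))] = (m.group('user'), parse_ts(m.group('ts')))
        continue
    m = CLOSED.search(line)
    if m and (m.group('ip'), m.group('port')) in open_conns:
        user, opened = open_conns.pop((m.group('ip'), m.group('port')))
        seconds = (parse_ts(m.group('ts')) - opened).total_seconds()
        print(f"{user}@{m.group('ip')}:{m.group('port')}  {seconds:.1f}s")

Session 8 above, for example, comes out to well under a second between accept and close.
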
May 13 12:34:20.981803 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 35942 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:20.983104 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:20.988326 systemd-logind[1504]: New session 21 of user core. May 13 12:34:20.999719 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 12:34:21.125481 sshd[4339]: Connection closed by 10.0.0.1 port 35942 May 13 12:34:21.125806 sshd-session[4337]: pam_unix(sshd:session): session closed for user core May 13 12:34:21.131105 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:35942.service: Deactivated successfully. May 13 12:34:21.133032 systemd[1]: session-21.scope: Deactivated successfully. May 13 12:34:21.134276 systemd-logind[1504]: Session 21 logged out. Waiting for processes to exit. May 13 12:34:21.135311 systemd-logind[1504]: Removed session 21. May 13 12:34:26.148060 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:44828.service - OpenSSH per-connection server daemon (10.0.0.1:44828). May 13 12:34:26.199828 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 44828 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:26.201009 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:26.204655 systemd-logind[1504]: New session 22 of user core. May 13 12:34:26.223681 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 12:34:26.338367 sshd[4355]: Connection closed by 10.0.0.1 port 44828 May 13 12:34:26.338839 sshd-session[4353]: pam_unix(sshd:session): session closed for user core May 13 12:34:26.350623 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:44828.service: Deactivated successfully. May 13 12:34:26.352857 systemd[1]: session-22.scope: Deactivated successfully. May 13 12:34:26.353630 systemd-logind[1504]: Session 22 logged out. Waiting for processes to exit. May 13 12:34:26.356995 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:44834.service - OpenSSH per-connection server daemon (10.0.0.1:44834). May 13 12:34:26.357921 systemd-logind[1504]: Removed session 22. May 13 12:34:26.411601 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 44834 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:26.412724 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:26.418335 systemd-logind[1504]: New session 23 of user core. May 13 12:34:26.431907 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 12:34:27.997172 containerd[1528]: time="2025-05-13T12:34:27.996714996Z" level=info msg="StopContainer for \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" with timeout 30 (s)" May 13 12:34:27.997901 containerd[1528]: time="2025-05-13T12:34:27.997198765Z" level=info msg="Stop container \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" with signal terminated" May 13 12:34:28.010328 systemd[1]: cri-containerd-8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c.scope: Deactivated successfully. 
May 13 12:34:28.011998 containerd[1528]: time="2025-05-13T12:34:28.011870172Z" level=info msg="received exit event container_id:\"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" id:\"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" pid:3315 exited_at:{seconds:1747139668 nanos:11497954}" May 13 12:34:28.012348 containerd[1528]: time="2025-05-13T12:34:28.012294148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" id:\"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" pid:3315 exited_at:{seconds:1747139668 nanos:11497954}" May 13 12:34:28.029211 containerd[1528]: time="2025-05-13T12:34:28.028905297Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:34:28.033533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c-rootfs.mount: Deactivated successfully. May 13 12:34:28.033660 containerd[1528]: time="2025-05-13T12:34:28.033598423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" id:\"b98384518e852fe77efe18718ed7e5c3ae5d2a91ef5284ae2915085cf9145b01\" pid:4401 exited_at:{seconds:1747139668 nanos:33320839}" May 13 12:34:28.037578 containerd[1528]: time="2025-05-13T12:34:28.037501315Z" level=info msg="StopContainer for \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" with timeout 2 (s)" May 13 12:34:28.038369 containerd[1528]: time="2025-05-13T12:34:28.037884052Z" level=info msg="Stop container \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" with signal terminated" May 13 12:34:28.046687 systemd-networkd[1430]: lxc_health: Link DOWN May 13 12:34:28.046694 systemd-networkd[1430]: lxc_health: Lost carrier May 13 12:34:28.065577 systemd[1]: cri-containerd-77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c.scope: Deactivated successfully. May 13 12:34:28.065869 systemd[1]: cri-containerd-77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c.scope: Consumed 6.311s CPU time, 121.4M memory peak, 128K read from disk, 14.1M written to disk. May 13 12:34:28.066903 containerd[1528]: time="2025-05-13T12:34:28.066387706Z" level=info msg="received exit event container_id:\"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" id:\"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" pid:3429 exited_at:{seconds:1747139668 nanos:66079924}" May 13 12:34:28.066903 containerd[1528]: time="2025-05-13T12:34:28.066479301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" id:\"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" pid:3429 exited_at:{seconds:1747139668 nanos:66079924}" May 13 12:34:28.084485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c-rootfs.mount: Deactivated successfully. 
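
When the cilium-agent scope is stopped, systemd logs a resource summary ("Consumed 6.311s CPU time, 121.4M memory peak, ...") next to the usual exit events. A small illustrative sketch that pulls those per-scope summaries out of the journal, assuming the wording shown here:

import re
import sys

# Matches entries like:
#   systemd[1]: cri-containerd-<id>.scope: Consumed 6.311s CPU time, 121.4M memory peak, ...
CONSUMED = re.compile(
    r'systemd\[1\]: (?P<unit>\S+\.scope): Consumed (?P<cpu>[^,]+) CPU time'
    r'(?:, (?P<mem>[^,]+) memory peak)?'
)

for line in sys.stdin:
    m = CONSUMED.search(line)
    if m:
        print(f"{m.group('unit')}  cpu={m.group('cpu')}  peak={m.group('mem') or 'n/a'}")

Here that attributes 6.311s of CPU and a 121.4M memory peak to the cilium-agent container's scope.
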
May 13 12:34:28.114690 containerd[1528]: time="2025-05-13T12:34:28.114639726Z" level=info msg="StopContainer for \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" returns successfully" May 13 12:34:28.115362 containerd[1528]: time="2025-05-13T12:34:28.115321687Z" level=info msg="StopPodSandbox for \"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\"" May 13 12:34:28.115404 containerd[1528]: time="2025-05-13T12:34:28.115387203Z" level=info msg="Container to stop \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:34:28.124948 systemd[1]: cri-containerd-593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d.scope: Deactivated successfully. May 13 12:34:28.126061 containerd[1528]: time="2025-05-13T12:34:28.125981064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\" id:\"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\" pid:3023 exit_status:137 exited_at:{seconds:1747139668 nanos:125680921}" May 13 12:34:28.152902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d-rootfs.mount: Deactivated successfully. May 13 12:34:28.176106 containerd[1528]: time="2025-05-13T12:34:28.176048218Z" level=info msg="StopContainer for \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" returns successfully" May 13 12:34:28.176668 containerd[1528]: time="2025-05-13T12:34:28.176624984Z" level=info msg="StopPodSandbox for \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\"" May 13 12:34:28.176738 containerd[1528]: time="2025-05-13T12:34:28.176697700Z" level=info msg="Container to stop \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:34:28.176738 containerd[1528]: time="2025-05-13T12:34:28.176730138Z" level=info msg="Container to stop \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:34:28.176791 containerd[1528]: time="2025-05-13T12:34:28.176739737Z" level=info msg="Container to stop \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:34:28.176791 containerd[1528]: time="2025-05-13T12:34:28.176748217Z" level=info msg="Container to stop \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:34:28.176791 containerd[1528]: time="2025-05-13T12:34:28.176756416Z" level=info msg="Container to stop \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:34:28.182593 systemd[1]: cri-containerd-d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc.scope: Deactivated successfully. 
May 13 12:34:28.188434 containerd[1528]: time="2025-05-13T12:34:28.188337499Z" level=info msg="shim disconnected" id=593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d namespace=k8s.io May 13 12:34:28.188434 containerd[1528]: time="2025-05-13T12:34:28.188365338Z" level=warning msg="cleaning up after shim disconnected" id=593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d namespace=k8s.io May 13 12:34:28.188434 containerd[1528]: time="2025-05-13T12:34:28.188393856Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 12:34:28.205239 containerd[1528]: time="2025-05-13T12:34:28.205184195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" id:\"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" pid:2938 exit_status:137 exited_at:{seconds:1747139668 nanos:182868659}" May 13 12:34:28.205474 containerd[1528]: time="2025-05-13T12:34:28.205435700Z" level=info msg="received exit event sandbox_id:\"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\" exit_status:137 exited_at:{seconds:1747139668 nanos:125680921}" May 13 12:34:28.205903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc-rootfs.mount: Deactivated successfully. May 13 12:34:28.206908 containerd[1528]: time="2025-05-13T12:34:28.206870616Z" level=info msg="TearDown network for sandbox \"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\" successfully" May 13 12:34:28.206908 containerd[1528]: time="2025-05-13T12:34:28.206898575Z" level=info msg="StopPodSandbox for \"593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d\" returns successfully" May 13 12:34:28.208851 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-593dee176660503932c7ba2104c928c7124723dcfcdbc954bf4817ffd8996a3d-shm.mount: Deactivated successfully. 
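
The teardown mirrors the bring-up: StopContainer, StopPodSandbox, shim disconnect, "TearDown network ... successfully", and finally the sandbox's shm mount is released. A last illustrative sketch, under the same input assumptions, pairs each StopPodSandbox request with its "returns successfully" confirmation so an unconfirmed teardown would stand out:

import re
import sys

# Matches both the request and the confirmation:
#   msg="StopPodSandbox for \"<sandbox>\""
#   msg="StopPodSandbox for \"<sandbox>\" returns successfully"
STOP = re.compile(r'StopPodSandbox for \\"(?P<sid>[0-9a-f]+)\\"(?P<done> returns successfully)?')

requested, confirmed = set(), set()
for line in sys.stdin:
    for m in STOP.finditer(line):
        (confirmed if m.group('done') else requested).add(m.group('sid'))

for sid in sorted(requested):
    print(f"{sid[:12]}  {'torn down' if sid in confirmed else 'NOT CONFIRMED'}")

In this capture both sandboxes (593dee1766... and d0aff202ab...) are confirmed torn down in the surrounding entries.
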
May 13 12:34:28.253312 containerd[1528]: time="2025-05-13T12:34:28.252471671Z" level=info msg="received exit event sandbox_id:\"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" exit_status:137 exited_at:{seconds:1747139668 nanos:182868659}" May 13 12:34:28.253312 containerd[1528]: time="2025-05-13T12:34:28.252754295Z" level=info msg="shim disconnected" id=d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc namespace=k8s.io May 13 12:34:28.253312 containerd[1528]: time="2025-05-13T12:34:28.252776214Z" level=warning msg="cleaning up after shim disconnected" id=d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc namespace=k8s.io May 13 12:34:28.253312 containerd[1528]: time="2025-05-13T12:34:28.252826131Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 12:34:28.253312 containerd[1528]: time="2025-05-13T12:34:28.252894727Z" level=info msg="TearDown network for sandbox \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" successfully" May 13 12:34:28.253312 containerd[1528]: time="2025-05-13T12:34:28.252915285Z" level=info msg="StopPodSandbox for \"d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc\" returns successfully" May 13 12:34:28.332200 kubelet[2771]: I0513 12:34:28.332142 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-net\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332200 kubelet[2771]: I0513 12:34:28.332197 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-lib-modules\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332605 kubelet[2771]: I0513 12:34:28.332219 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cni-path\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332605 kubelet[2771]: I0513 12:34:28.332241 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4acd787a-22e0-485f-84a0-09cebeaf02ef-cilium-config-path\") pod \"4acd787a-22e0-485f-84a0-09cebeaf02ef\" (UID: \"4acd787a-22e0-485f-84a0-09cebeaf02ef\") " May 13 12:34:28.332605 kubelet[2771]: I0513 12:34:28.332257 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-kernel\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332605 kubelet[2771]: I0513 12:34:28.332286 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf4sf\" (UniqueName: \"kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-kube-api-access-cf4sf\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332605 kubelet[2771]: I0513 12:34:28.332304 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbdn5\" (UniqueName: 
\"kubernetes.io/projected/4acd787a-22e0-485f-84a0-09cebeaf02ef-kube-api-access-zbdn5\") pod \"4acd787a-22e0-485f-84a0-09cebeaf02ef\" (UID: \"4acd787a-22e0-485f-84a0-09cebeaf02ef\") " May 13 12:34:28.332605 kubelet[2771]: I0513 12:34:28.332355 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hubble-tls\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332734 kubelet[2771]: I0513 12:34:28.332374 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6caa8d14-0307-4fe6-9619-4c0fd969eda7-clustermesh-secrets\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332734 kubelet[2771]: I0513 12:34:28.332390 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hostproc\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332734 kubelet[2771]: I0513 12:34:28.332406 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-xtables-lock\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332734 kubelet[2771]: I0513 12:34:28.332429 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-run\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332734 kubelet[2771]: I0513 12:34:28.332445 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-cgroup\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332734 kubelet[2771]: I0513 12:34:28.332461 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-etc-cni-netd\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332855 kubelet[2771]: I0513 12:34:28.332475 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-bpf-maps\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.332855 kubelet[2771]: I0513 12:34:28.332495 2771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-config-path\") pod \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\" (UID: \"6caa8d14-0307-4fe6-9619-4c0fd969eda7\") " May 13 12:34:28.334867 kubelet[2771]: I0513 12:34:28.334839 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-lib-modules" (OuterVolumeSpecName: 
"lib-modules") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335239 kubelet[2771]: I0513 12:34:28.334981 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cni-path" (OuterVolumeSpecName: "cni-path") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335239 kubelet[2771]: I0513 12:34:28.335008 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335239 kubelet[2771]: I0513 12:34:28.335065 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hostproc" (OuterVolumeSpecName: "hostproc") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335239 kubelet[2771]: I0513 12:34:28.335101 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335239 kubelet[2771]: I0513 12:34:28.335120 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335386 kubelet[2771]: I0513 12:34:28.335135 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335439 kubelet[2771]: I0513 12:34:28.335416 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.335572 kubelet[2771]: I0513 12:34:28.335542 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.338578 kubelet[2771]: I0513 12:34:28.337868 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:34:28.342663 kubelet[2771]: I0513 12:34:28.342217 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 12:34:28.342663 kubelet[2771]: I0513 12:34:28.342230 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4acd787a-22e0-485f-84a0-09cebeaf02ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4acd787a-22e0-485f-84a0-09cebeaf02ef" (UID: "4acd787a-22e0-485f-84a0-09cebeaf02ef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 12:34:28.343332 kubelet[2771]: I0513 12:34:28.343275 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 12:34:28.343518 kubelet[2771]: I0513 12:34:28.343471 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6caa8d14-0307-4fe6-9619-4c0fd969eda7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 12:34:28.344344 kubelet[2771]: I0513 12:34:28.344309 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-kube-api-access-cf4sf" (OuterVolumeSpecName: "kube-api-access-cf4sf") pod "6caa8d14-0307-4fe6-9619-4c0fd969eda7" (UID: "6caa8d14-0307-4fe6-9619-4c0fd969eda7"). InnerVolumeSpecName "kube-api-access-cf4sf". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 12:34:28.344395 kubelet[2771]: I0513 12:34:28.344366 2771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4acd787a-22e0-485f-84a0-09cebeaf02ef-kube-api-access-zbdn5" (OuterVolumeSpecName: "kube-api-access-zbdn5") pod "4acd787a-22e0-485f-84a0-09cebeaf02ef" (UID: "4acd787a-22e0-485f-84a0-09cebeaf02ef"). 
InnerVolumeSpecName "kube-api-access-zbdn5". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 12:34:28.432761 kubelet[2771]: I0513 12:34:28.432710 2771 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432761 kubelet[2771]: I0513 12:34:28.432758 2771 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432775 2771 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432790 2771 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432807 2771 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4acd787a-22e0-485f-84a0-09cebeaf02ef-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432822 2771 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432836 2771 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cf4sf\" (UniqueName: \"kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-kube-api-access-cf4sf\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432850 2771 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zbdn5\" (UniqueName: \"kubernetes.io/projected/4acd787a-22e0-485f-84a0-09cebeaf02ef-kube-api-access-zbdn5\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432865 2771 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.432923 kubelet[2771]: I0513 12:34:28.432878 2771 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6caa8d14-0307-4fe6-9619-4c0fd969eda7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.433106 kubelet[2771]: I0513 12:34:28.432893 2771 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.433106 kubelet[2771]: I0513 12:34:28.432906 2771 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.433106 kubelet[2771]: I0513 12:34:28.432914 2771 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.433106 kubelet[2771]: I0513 12:34:28.432921 2771 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.433106 kubelet[2771]: I0513 12:34:28.432928 2771 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.433106 kubelet[2771]: I0513 12:34:28.432936 2771 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6caa8d14-0307-4fe6-9619-4c0fd969eda7-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 12:34:28.507863 kubelet[2771]: I0513 12:34:28.507756 2771 scope.go:117] "RemoveContainer" containerID="8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c" May 13 12:34:28.510866 containerd[1528]: time="2025-05-13T12:34:28.510801294Z" level=info msg="RemoveContainer for \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\"" May 13 12:34:28.516926 systemd[1]: Removed slice kubepods-besteffort-pod4acd787a_22e0_485f_84a0_09cebeaf02ef.slice - libcontainer container kubepods-besteffort-pod4acd787a_22e0_485f_84a0_09cebeaf02ef.slice. May 13 12:34:28.520890 systemd[1]: Removed slice kubepods-burstable-pod6caa8d14_0307_4fe6_9619_4c0fd969eda7.slice - libcontainer container kubepods-burstable-pod6caa8d14_0307_4fe6_9619_4c0fd969eda7.slice. May 13 12:34:28.520987 systemd[1]: kubepods-burstable-pod6caa8d14_0307_4fe6_9619_4c0fd969eda7.slice: Consumed 6.458s CPU time, 121.7M memory peak, 132K read from disk, 14.3M written to disk. 
May 13 12:34:28.579927 containerd[1528]: time="2025-05-13T12:34:28.579878497Z" level=info msg="RemoveContainer for \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" returns successfully" May 13 12:34:28.580239 kubelet[2771]: I0513 12:34:28.580207 2771 scope.go:117] "RemoveContainer" containerID="8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c" May 13 12:34:28.580536 containerd[1528]: time="2025-05-13T12:34:28.580489821Z" level=error msg="ContainerStatus for \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\": not found" May 13 12:34:28.584529 kubelet[2771]: E0513 12:34:28.584503 2771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\": not found" containerID="8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c" May 13 12:34:28.584712 kubelet[2771]: I0513 12:34:28.584637 2771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c"} err="failed to get container status \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cb98ff9881036b25b596ba84781130a9d128b82a0d2c2b4910229632707e70c\": not found" May 13 12:34:28.584771 kubelet[2771]: I0513 12:34:28.584760 2771 scope.go:117] "RemoveContainer" containerID="77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c" May 13 12:34:28.586497 containerd[1528]: time="2025-05-13T12:34:28.586465312Z" level=info msg="RemoveContainer for \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\"" May 13 12:34:28.625781 containerd[1528]: time="2025-05-13T12:34:28.625680220Z" level=info msg="RemoveContainer for \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" returns successfully" May 13 12:34:28.626191 kubelet[2771]: I0513 12:34:28.626029 2771 scope.go:117] "RemoveContainer" containerID="c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2" May 13 12:34:28.632968 containerd[1528]: time="2025-05-13T12:34:28.632935716Z" level=info msg="RemoveContainer for \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\"" May 13 12:34:28.638261 containerd[1528]: time="2025-05-13T12:34:28.638045178Z" level=info msg="RemoveContainer for \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" returns successfully" May 13 12:34:28.638346 kubelet[2771]: I0513 12:34:28.638285 2771 scope.go:117] "RemoveContainer" containerID="a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad" May 13 12:34:28.641309 containerd[1528]: time="2025-05-13T12:34:28.641280429Z" level=info msg="RemoveContainer for \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\"" May 13 12:34:28.645013 containerd[1528]: time="2025-05-13T12:34:28.644898257Z" level=info msg="RemoveContainer for \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" returns successfully" May 13 12:34:28.645112 kubelet[2771]: I0513 12:34:28.645090 2771 scope.go:117] "RemoveContainer" containerID="851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314" May 13 12:34:28.646430 containerd[1528]: 
time="2025-05-13T12:34:28.646407489Z" level=info msg="RemoveContainer for \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\"" May 13 12:34:28.649067 containerd[1528]: time="2025-05-13T12:34:28.648970579Z" level=info msg="RemoveContainer for \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" returns successfully" May 13 12:34:28.649227 kubelet[2771]: I0513 12:34:28.649112 2771 scope.go:117] "RemoveContainer" containerID="d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd" May 13 12:34:28.650584 containerd[1528]: time="2025-05-13T12:34:28.650543567Z" level=info msg="RemoveContainer for \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\"" May 13 12:34:28.654183 containerd[1528]: time="2025-05-13T12:34:28.654151917Z" level=info msg="RemoveContainer for \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" returns successfully" May 13 12:34:28.654638 kubelet[2771]: I0513 12:34:28.654320 2771 scope.go:117] "RemoveContainer" containerID="77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c" May 13 12:34:28.654701 containerd[1528]: time="2025-05-13T12:34:28.654529534Z" level=error msg="ContainerStatus for \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\": not found" May 13 12:34:28.654890 kubelet[2771]: E0513 12:34:28.654867 2771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\": not found" containerID="77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c" May 13 12:34:28.654931 kubelet[2771]: I0513 12:34:28.654900 2771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c"} err="failed to get container status \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"77c4e49f4442065e1cb469b8c35de53fbc5d37e3a56fe7f757daabcf5c574f8c\": not found" May 13 12:34:28.654931 kubelet[2771]: I0513 12:34:28.654921 2771 scope.go:117] "RemoveContainer" containerID="c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2" May 13 12:34:28.655185 containerd[1528]: time="2025-05-13T12:34:28.655096661Z" level=error msg="ContainerStatus for \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\": not found" May 13 12:34:28.655310 kubelet[2771]: E0513 12:34:28.655289 2771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\": not found" containerID="c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2" May 13 12:34:28.655389 kubelet[2771]: I0513 12:34:28.655312 2771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2"} err="failed to get container status 
\"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c967dc506e140c28802f3899f594e17c9c6c57814c9c505a67852228f58a63c2\": not found" May 13 12:34:28.655413 kubelet[2771]: I0513 12:34:28.655396 2771 scope.go:117] "RemoveContainer" containerID="a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad" May 13 12:34:28.655642 containerd[1528]: time="2025-05-13T12:34:28.655570674Z" level=error msg="ContainerStatus for \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\": not found" May 13 12:34:28.655709 kubelet[2771]: E0513 12:34:28.655687 2771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\": not found" containerID="a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad" May 13 12:34:28.655752 kubelet[2771]: I0513 12:34:28.655713 2771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad"} err="failed to get container status \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"a62da769d550517817e18fd797c1e81bc7e6817785027053a4700150350905ad\": not found" May 13 12:34:28.655752 kubelet[2771]: I0513 12:34:28.655730 2771 scope.go:117] "RemoveContainer" containerID="851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314" May 13 12:34:28.655912 containerd[1528]: time="2025-05-13T12:34:28.655882295Z" level=error msg="ContainerStatus for \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\": not found" May 13 12:34:28.656037 kubelet[2771]: E0513 12:34:28.656015 2771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\": not found" containerID="851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314" May 13 12:34:28.656111 kubelet[2771]: I0513 12:34:28.656092 2771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314"} err="failed to get container status \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\": rpc error: code = NotFound desc = an error occurred when try to find container \"851d43786a3c5f4fcaf1e85ee81832b304230b8b63c9a7c6a064f6fe43654314\": not found" May 13 12:34:28.656141 kubelet[2771]: I0513 12:34:28.656115 2771 scope.go:117] "RemoveContainer" containerID="d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd" May 13 12:34:28.656299 containerd[1528]: time="2025-05-13T12:34:28.656268313Z" level=error msg="ContainerStatus for \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\": not found" May 13 12:34:28.656393 kubelet[2771]: E0513 12:34:28.656375 2771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\": not found" containerID="d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd" May 13 12:34:28.656423 kubelet[2771]: I0513 12:34:28.656397 2771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd"} err="failed to get container status \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"d39af00773a91035d8f04ee369ebcf409edcd7390f48eb0e0bd67832d4eb62fd\": not found" May 13 12:34:29.032332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0aff202abbc210579d158e872a01d635db476040d5e9806ecc3068dc5bdb0bc-shm.mount: Deactivated successfully. May 13 12:34:29.032436 systemd[1]: var-lib-kubelet-pods-4acd787a\x2d22e0\x2d485f\x2d84a0\x2d09cebeaf02ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzbdn5.mount: Deactivated successfully. May 13 12:34:29.032491 systemd[1]: var-lib-kubelet-pods-6caa8d14\x2d0307\x2d4fe6\x2d9619\x2d4c0fd969eda7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcf4sf.mount: Deactivated successfully. May 13 12:34:29.032537 systemd[1]: var-lib-kubelet-pods-6caa8d14\x2d0307\x2d4fe6\x2d9619\x2d4c0fd969eda7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 12:34:29.032607 systemd[1]: var-lib-kubelet-pods-6caa8d14\x2d0307\x2d4fe6\x2d9619\x2d4c0fd969eda7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 12:34:29.253726 kubelet[2771]: I0513 12:34:29.253691 2771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4acd787a-22e0-485f-84a0-09cebeaf02ef" path="/var/lib/kubelet/pods/4acd787a-22e0-485f-84a0-09cebeaf02ef/volumes" May 13 12:34:29.254105 kubelet[2771]: I0513 12:34:29.254070 2771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" path="/var/lib/kubelet/pods/6caa8d14-0307-4fe6-9619-4c0fd969eda7/volumes" May 13 12:34:29.963090 sshd[4370]: Connection closed by 10.0.0.1 port 44834 May 13 12:34:29.963344 sshd-session[4368]: pam_unix(sshd:session): session closed for user core May 13 12:34:29.975649 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:44834.service: Deactivated successfully. May 13 12:34:29.977980 systemd[1]: session-23.scope: Deactivated successfully. May 13 12:34:29.978904 systemd-logind[1504]: Session 23 logged out. Waiting for processes to exit. May 13 12:34:29.981912 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:44850.service - OpenSSH per-connection server daemon (10.0.0.1:44850). May 13 12:34:29.982496 systemd-logind[1504]: Removed session 23. May 13 12:34:30.034697 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 44850 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:30.036003 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:30.040361 systemd-logind[1504]: New session 24 of user core. May 13 12:34:30.053682 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 13 12:34:30.967571 sshd[4526]: Connection closed by 10.0.0.1 port 44850 May 13 12:34:30.966752 sshd-session[4524]: pam_unix(sshd:session): session closed for user core May 13 12:34:30.977225 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:44850.service: Deactivated successfully. May 13 12:34:30.981177 systemd[1]: session-24.scope: Deactivated successfully. May 13 12:34:30.984892 kubelet[2771]: I0513 12:34:30.984375 2771 topology_manager.go:215] "Topology Admit Handler" podUID="2e021716-2083-4a66-a13d-9e279f876040" podNamespace="kube-system" podName="cilium-vg4vl" May 13 12:34:30.984892 kubelet[2771]: E0513 12:34:30.984504 2771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" containerName="apply-sysctl-overwrites" May 13 12:34:30.984892 kubelet[2771]: E0513 12:34:30.984514 2771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" containerName="mount-bpf-fs" May 13 12:34:30.984892 kubelet[2771]: E0513 12:34:30.984520 2771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" containerName="cilium-agent" May 13 12:34:30.984892 kubelet[2771]: E0513 12:34:30.984527 2771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" containerName="mount-cgroup" May 13 12:34:30.984892 kubelet[2771]: E0513 12:34:30.984532 2771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4acd787a-22e0-485f-84a0-09cebeaf02ef" containerName="cilium-operator" May 13 12:34:30.984892 kubelet[2771]: E0513 12:34:30.984538 2771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" containerName="clean-cilium-state" May 13 12:34:30.984412 systemd-logind[1504]: Session 24 logged out. Waiting for processes to exit. May 13 12:34:30.986804 kubelet[2771]: I0513 12:34:30.985987 2771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4acd787a-22e0-485f-84a0-09cebeaf02ef" containerName="cilium-operator" May 13 12:34:30.986804 kubelet[2771]: I0513 12:34:30.986008 2771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6caa8d14-0307-4fe6-9619-4c0fd969eda7" containerName="cilium-agent" May 13 12:34:30.988533 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:44864.service - OpenSSH per-connection server daemon (10.0.0.1:44864). May 13 12:34:30.992964 systemd-logind[1504]: Removed session 24. May 13 12:34:31.006875 systemd[1]: Created slice kubepods-burstable-pod2e021716_2083_4a66_a13d_9e279f876040.slice - libcontainer container kubepods-burstable-pod2e021716_2083_4a66_a13d_9e279f876040.slice. 
May 13 12:34:31.047934 kubelet[2771]: I0513 12:34:31.047898 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-cilium-cgroup\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048092 kubelet[2771]: I0513 12:34:31.048076 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e021716-2083-4a66-a13d-9e279f876040-hubble-tls\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048224 kubelet[2771]: I0513 12:34:31.048211 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-bpf-maps\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048337 kubelet[2771]: I0513 12:34:31.048325 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-cni-path\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048415 kubelet[2771]: I0513 12:34:31.048403 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-host-proc-sys-net\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048492 kubelet[2771]: I0513 12:34:31.048481 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-lib-modules\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048589 kubelet[2771]: I0513 12:34:31.048576 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e021716-2083-4a66-a13d-9e279f876040-clustermesh-secrets\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048683 kubelet[2771]: I0513 12:34:31.048669 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-cilium-run\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048749 kubelet[2771]: I0513 12:34:31.048737 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-hostproc\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048809 kubelet[2771]: I0513 12:34:31.048798 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-xtables-lock\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048875 kubelet[2771]: I0513 12:34:31.048857 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2e021716-2083-4a66-a13d-9e279f876040-cilium-ipsec-secrets\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.048955 kubelet[2771]: I0513 12:34:31.048944 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-host-proc-sys-kernel\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.049055 kubelet[2771]: I0513 12:34:31.049042 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvvg\" (UniqueName: \"kubernetes.io/projected/2e021716-2083-4a66-a13d-9e279f876040-kube-api-access-8pvvg\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.049141 kubelet[2771]: I0513 12:34:31.049130 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e021716-2083-4a66-a13d-9e279f876040-etc-cni-netd\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.049262 kubelet[2771]: I0513 12:34:31.049226 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e021716-2083-4a66-a13d-9e279f876040-cilium-config-path\") pod \"cilium-vg4vl\" (UID: \"2e021716-2083-4a66-a13d-9e279f876040\") " pod="kube-system/cilium-vg4vl" May 13 12:34:31.053382 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 44864 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:31.054502 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:31.059137 systemd-logind[1504]: New session 25 of user core. May 13 12:34:31.069223 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 12:34:31.119612 sshd[4540]: Connection closed by 10.0.0.1 port 44864 May 13 12:34:31.120001 sshd-session[4538]: pam_unix(sshd:session): session closed for user core May 13 12:34:31.131937 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:44864.service: Deactivated successfully. May 13 12:34:31.133616 systemd[1]: session-25.scope: Deactivated successfully. May 13 12:34:31.134212 systemd-logind[1504]: Session 25 logged out. Waiting for processes to exit. May 13 12:34:31.136800 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:44878.service - OpenSSH per-connection server daemon (10.0.0.1:44878). May 13 12:34:31.137288 systemd-logind[1504]: Removed session 25. May 13 12:34:31.193088 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 44878 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:34:31.194148 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:34:31.198613 systemd-logind[1504]: New session 26 of user core. 
May 13 12:34:31.207752 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 12:34:31.306928 kubelet[2771]: E0513 12:34:31.306677 2771 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 12:34:31.313080 containerd[1528]: time="2025-05-13T12:34:31.313041773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vg4vl,Uid:2e021716-2083-4a66-a13d-9e279f876040,Namespace:kube-system,Attempt:0,}" May 13 12:34:31.328077 containerd[1528]: time="2025-05-13T12:34:31.328029972Z" level=info msg="connecting to shim 136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7" address="unix:///run/containerd/s/624337d5f1d48899f3446ee1a9299caa066f0d5564a54db15ea56e248ffdebc4" namespace=k8s.io protocol=ttrpc version=3 May 13 12:34:31.350718 systemd[1]: Started cri-containerd-136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7.scope - libcontainer container 136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7. May 13 12:34:31.374441 containerd[1528]: time="2025-05-13T12:34:31.374362826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vg4vl,Uid:2e021716-2083-4a66-a13d-9e279f876040,Namespace:kube-system,Attempt:0,} returns sandbox id \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\"" May 13 12:34:31.378382 containerd[1528]: time="2025-05-13T12:34:31.378347445Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:34:31.384561 containerd[1528]: time="2025-05-13T12:34:31.384520284Z" level=info msg="Container 3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73: CDI devices from CRI Config.CDIDevices: []" May 13 12:34:31.389969 containerd[1528]: time="2025-05-13T12:34:31.389913039Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73\"" May 13 12:34:31.390587 containerd[1528]: time="2025-05-13T12:34:31.390537851Z" level=info msg="StartContainer for \"3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73\"" May 13 12:34:31.401340 containerd[1528]: time="2025-05-13T12:34:31.401299202Z" level=info msg="connecting to shim 3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73" address="unix:///run/containerd/s/624337d5f1d48899f3446ee1a9299caa066f0d5564a54db15ea56e248ffdebc4" protocol=ttrpc version=3 May 13 12:34:31.424725 systemd[1]: Started cri-containerd-3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73.scope - libcontainer container 3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73. May 13 12:34:31.451948 containerd[1528]: time="2025-05-13T12:34:31.451897222Z" level=info msg="StartContainer for \"3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73\" returns successfully" May 13 12:34:31.464020 systemd[1]: cri-containerd-3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73.scope: Deactivated successfully. 
May 13 12:34:31.465458 containerd[1528]: time="2025-05-13T12:34:31.465422487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73\" id:\"3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73\" pid:4618 exited_at:{seconds:1747139671 nanos:464799836}" May 13 12:34:31.465666 containerd[1528]: time="2025-05-13T12:34:31.465601879Z" level=info msg="received exit event container_id:\"3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73\" id:\"3a5b77df1b8288945fffcd58bc83d2126b762cf528c5fe03880dfa4ca7d20c73\" pid:4618 exited_at:{seconds:1747139671 nanos:464799836}" May 13 12:34:31.527396 containerd[1528]: time="2025-05-13T12:34:31.527348833Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:34:31.537595 containerd[1528]: time="2025-05-13T12:34:31.537251423Z" level=info msg="Container cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076: CDI devices from CRI Config.CDIDevices: []" May 13 12:34:31.544923 containerd[1528]: time="2025-05-13T12:34:31.544851797Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076\"" May 13 12:34:31.545610 containerd[1528]: time="2025-05-13T12:34:31.545345855Z" level=info msg="StartContainer for \"cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076\"" May 13 12:34:31.546639 containerd[1528]: time="2025-05-13T12:34:31.546606317Z" level=info msg="connecting to shim cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076" address="unix:///run/containerd/s/624337d5f1d48899f3446ee1a9299caa066f0d5564a54db15ea56e248ffdebc4" protocol=ttrpc version=3 May 13 12:34:31.566734 systemd[1]: Started cri-containerd-cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076.scope - libcontainer container cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076. May 13 12:34:31.591276 containerd[1528]: time="2025-05-13T12:34:31.591235609Z" level=info msg="StartContainer for \"cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076\" returns successfully" May 13 12:34:31.602000 systemd[1]: cri-containerd-cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076.scope: Deactivated successfully. 
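
At this point the sandbox 136f98d9... has been created and its short-lived init containers (mount-cgroup, then apply-sysctl-overwrites) are being created, started and scoped, each exiting almost immediately as the TaskExit events show. A minimal sketch, using the same assumed CRI endpoint as before, of listing the containers filtered to that sandbox; the sandbox ID is copied from the log.

    // Minimal sketch: CRI ListContainers filtered by pod sandbox ID.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	sandbox := "136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7" // from the log
    	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
    		Filter: &runtimeapi.ContainerFilter{PodSandboxId: sandbox},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s\t%s\n", c.Metadata.Name, c.State)
    	}
    }

Run while the init chain is in flight, this would show the already-finished init containers as CONTAINER_EXITED and, later in the log, cilium-agent as CONTAINER_RUNNING.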
May 13 12:34:31.603782 containerd[1528]: time="2025-05-13T12:34:31.603752000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076\" id:\"cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076\" pid:4663 exited_at:{seconds:1747139671 nanos:603030833}" May 13 12:34:31.603872 containerd[1528]: time="2025-05-13T12:34:31.603846396Z" level=info msg="received exit event container_id:\"cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076\" id:\"cc49fd9d56c5190e0917fc3ac5fea06f5fa2acbeb68e5f4f7b28e731c12c6076\" pid:4663 exited_at:{seconds:1747139671 nanos:603030833}" May 13 12:34:32.532286 containerd[1528]: time="2025-05-13T12:34:32.531991731Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 12:34:32.542411 containerd[1528]: time="2025-05-13T12:34:32.542371741Z" level=info msg="Container a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488: CDI devices from CRI Config.CDIDevices: []" May 13 12:34:32.550484 containerd[1528]: time="2025-05-13T12:34:32.550451607Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488\"" May 13 12:34:32.551230 containerd[1528]: time="2025-05-13T12:34:32.551109899Z" level=info msg="StartContainer for \"a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488\"" May 13 12:34:32.552853 containerd[1528]: time="2025-05-13T12:34:32.552814429Z" level=info msg="connecting to shim a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488" address="unix:///run/containerd/s/624337d5f1d48899f3446ee1a9299caa066f0d5564a54db15ea56e248ffdebc4" protocol=ttrpc version=3 May 13 12:34:32.573685 systemd[1]: Started cri-containerd-a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488.scope - libcontainer container a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488. May 13 12:34:32.626288 systemd[1]: cri-containerd-a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488.scope: Deactivated successfully. May 13 12:34:32.627748 containerd[1528]: time="2025-05-13T12:34:32.627711289Z" level=info msg="received exit event container_id:\"a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488\" id:\"a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488\" pid:4708 exited_at:{seconds:1747139672 nanos:627442580}" May 13 12:34:32.628515 containerd[1528]: time="2025-05-13T12:34:32.627793686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488\" id:\"a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488\" pid:4708 exited_at:{seconds:1747139672 nanos:627442580}" May 13 12:34:32.629474 containerd[1528]: time="2025-05-13T12:34:32.629445737Z" level=info msg="StartContainer for \"a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488\" returns successfully" May 13 12:34:32.645634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3faafbc4d51913a912ca913a6b2f6b09a177ae86ebe5351731feaaa59f32488-rootfs.mount: Deactivated successfully. 
May 13 12:34:33.002419 kubelet[2771]: I0513 12:34:33.002361 2771 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T12:34:33Z","lastTransitionTime":"2025-05-13T12:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 12:34:33.539738 containerd[1528]: time="2025-05-13T12:34:33.539698466Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 12:34:33.551724 containerd[1528]: time="2025-05-13T12:34:33.550907806Z" level=info msg="Container f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69: CDI devices from CRI Config.CDIDevices: []" May 13 12:34:33.562684 containerd[1528]: time="2025-05-13T12:34:33.562647966Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69\"" May 13 12:34:33.563218 containerd[1528]: time="2025-05-13T12:34:33.563186826Z" level=info msg="StartContainer for \"f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69\"" May 13 12:34:33.564048 containerd[1528]: time="2025-05-13T12:34:33.564010275Z" level=info msg="connecting to shim f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69" address="unix:///run/containerd/s/624337d5f1d48899f3446ee1a9299caa066f0d5564a54db15ea56e248ffdebc4" protocol=ttrpc version=3 May 13 12:34:33.589697 systemd[1]: Started cri-containerd-f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69.scope - libcontainer container f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69. May 13 12:34:33.610802 systemd[1]: cri-containerd-f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69.scope: Deactivated successfully. May 13 12:34:33.613272 containerd[1528]: time="2025-05-13T12:34:33.613237232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69\" id:\"f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69\" pid:4748 exited_at:{seconds:1747139673 nanos:612996281}" May 13 12:34:33.613272 containerd[1528]: time="2025-05-13T12:34:33.613244511Z" level=info msg="received exit event container_id:\"f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69\" id:\"f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69\" pid:4748 exited_at:{seconds:1747139673 nanos:612996281}" May 13 12:34:33.619311 containerd[1528]: time="2025-05-13T12:34:33.619279285Z" level=info msg="StartContainer for \"f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69\" returns successfully" May 13 12:34:33.629791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f03c47d746d74cd0a4fdc8ca75f10b41c072bc540b711697561a063d8e17ee69-rootfs.mount: Deactivated successfully. 
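
The "Node became not ready" setters entry at the start of the block above is the kubelet flipping the node's Ready condition to False with reason KubeletNotReady while the CNI plugin is uninitialized; it returns to True once cilium-agent is running and lxc_health comes up further down. A minimal sketch, again assuming client-go and a KUBECONFIG, of reading that condition for the node named "localhost" from the log.

    // Minimal sketch: print the Ready condition of node "localhost".
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
    		}
    	}
    }

While the log shows KubeletNotReady, the printed message matches the "container runtime network not ready" text recorded above; after the cilium-vg4vl pod starts, the same query reports Ready=True.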
May 13 12:34:34.555736 containerd[1528]: time="2025-05-13T12:34:34.555518096Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 12:34:34.563581 containerd[1528]: time="2025-05-13T12:34:34.563450829Z" level=info msg="Container 7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e: CDI devices from CRI Config.CDIDevices: []" May 13 12:34:34.566902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428489995.mount: Deactivated successfully. May 13 12:34:34.572444 containerd[1528]: time="2025-05-13T12:34:34.572407528Z" level=info msg="CreateContainer within sandbox \"136f98d9d6e56a954566e4c3a6fb3e4cec83eafc965d63e1ec420adb8ab52dc7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\"" May 13 12:34:34.573164 containerd[1528]: time="2025-05-13T12:34:34.573141503Z" level=info msg="StartContainer for \"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\"" May 13 12:34:34.574183 containerd[1528]: time="2025-05-13T12:34:34.574151469Z" level=info msg="connecting to shim 7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e" address="unix:///run/containerd/s/624337d5f1d48899f3446ee1a9299caa066f0d5564a54db15ea56e248ffdebc4" protocol=ttrpc version=3 May 13 12:34:34.593694 systemd[1]: Started cri-containerd-7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e.scope - libcontainer container 7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e. May 13 12:34:34.624099 containerd[1528]: time="2025-05-13T12:34:34.624066710Z" level=info msg="StartContainer for \"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\" returns successfully" May 13 12:34:34.674814 containerd[1528]: time="2025-05-13T12:34:34.674781404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\" id:\"eff97b94d09ecd40c77c7a1bc3b432dfe0925f701c822e6bb3b72c6c4e1f48e8\" pid:4814 exited_at:{seconds:1747139674 nanos:674333539}" May 13 12:34:34.893568 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 13 12:34:37.559044 containerd[1528]: time="2025-05-13T12:34:37.558974422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\" id:\"ef2189ed25ff2abd99f56b02d7cbc89710e8afbc357cff890c43ed896a9f74e1\" pid:5249 exit_status:1 exited_at:{seconds:1747139677 nanos:558517113}" May 13 12:34:37.664372 systemd-networkd[1430]: lxc_health: Link UP May 13 12:34:37.673625 systemd-networkd[1430]: lxc_health: Gained carrier May 13 12:34:39.274700 systemd-networkd[1430]: lxc_health: Gained IPv6LL May 13 12:34:39.329829 kubelet[2771]: I0513 12:34:39.329538 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vg4vl" podStartSLOduration=9.32952112 podStartE2EDuration="9.32952112s" podCreationTimestamp="2025-05-13 12:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:34:35.565571365 +0000 UTC m=+84.383636755" watchObservedRunningTime="2025-05-13 12:34:39.32952112 +0000 UTC m=+88.147586510" May 13 12:34:39.662467 containerd[1528]: time="2025-05-13T12:34:39.662429416Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\" id:\"0324d10dacb42b3c5af2cdb0616cce8ded7fd037a111928fe2afd5eac7f3375d\" pid:5351 exited_at:{seconds:1747139679 nanos:662164900}" May 13 12:34:41.762329 containerd[1528]: time="2025-05-13T12:34:41.762271958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\" id:\"e77dd2927491cc77088512bef9f9fea73449f9a524b23051a0fe41fd7a59cddb\" pid:5385 exited_at:{seconds:1747139681 nanos:761986601}" May 13 12:34:43.871091 containerd[1528]: time="2025-05-13T12:34:43.871015266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f3c2750931fbf8ee0c6621cd22c1f584580fa033d241c3f154ca34c4277351e\" id:\"5d6a7d0e2b23a16bde0235faf5622b4cdb8db70ce7c875239557521203acbc47\" pid:5409 exited_at:{seconds:1747139683 nanos:868938035}" May 13 12:34:43.876052 sshd[4554]: Connection closed by 10.0.0.1 port 44878 May 13 12:34:43.876704 sshd-session[4547]: pam_unix(sshd:session): session closed for user core May 13 12:34:43.880462 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:44878.service: Deactivated successfully. May 13 12:34:43.882221 systemd[1]: session-26.scope: Deactivated successfully. May 13 12:34:43.884062 systemd-logind[1504]: Session 26 logged out. Waiting for processes to exit. May 13 12:34:43.885185 systemd-logind[1504]: Removed session 26.