May 14 17:56:57.822287 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 17:56:57.822311 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 16:42:23 -00 2025 May 14 17:56:57.822322 kernel: KASLR enabled May 14 17:56:57.822327 kernel: efi: EFI v2.7 by EDK II May 14 17:56:57.822333 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18 May 14 17:56:57.822339 kernel: random: crng init done May 14 17:56:57.822345 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 14 17:56:57.822365 kernel: secureboot: Secure boot enabled May 14 17:56:57.822371 kernel: ACPI: Early table checksum verification disabled May 14 17:56:57.822378 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) May 14 17:56:57.822385 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 14 17:56:57.822390 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822396 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822402 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822409 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822417 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822423 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822429 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822435 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822441 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 
BOCHS BXPC 00000001 BXPC 00000001) May 14 17:56:57.822447 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 14 17:56:57.822453 kernel: ACPI: Use ACPI SPCR as default console: Yes May 14 17:56:57.822459 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 14 17:56:57.822465 kernel: NODE_DATA(0) allocated [mem 0xdc737dc0-0xdc73efff] May 14 17:56:57.822471 kernel: Zone ranges: May 14 17:56:57.822479 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 14 17:56:57.822485 kernel: DMA32 empty May 14 17:56:57.822491 kernel: Normal empty May 14 17:56:57.822505 kernel: Device empty May 14 17:56:57.822511 kernel: Movable zone start for each node May 14 17:56:57.822517 kernel: Early memory node ranges May 14 17:56:57.822523 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] May 14 17:56:57.822529 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] May 14 17:56:57.822535 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] May 14 17:56:57.822541 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] May 14 17:56:57.822547 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] May 14 17:56:57.822553 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] May 14 17:56:57.822561 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] May 14 17:56:57.822567 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] May 14 17:56:57.822573 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 14 17:56:57.822591 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 14 17:56:57.822597 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 14 17:56:57.822604 kernel: psci: probing for conduit method from ACPI. May 14 17:56:57.822610 kernel: psci: PSCIv1.1 detected in firmware. 
May 14 17:56:57.822618 kernel: psci: Using standard PSCI v0.2 function IDs May 14 17:56:57.822624 kernel: psci: Trusted OS migration not required May 14 17:56:57.822630 kernel: psci: SMC Calling Convention v1.1 May 14 17:56:57.822637 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 17:56:57.822643 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 14 17:56:57.822649 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 14 17:56:57.822656 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 14 17:56:57.822662 kernel: Detected PIPT I-cache on CPU0 May 14 17:56:57.822669 kernel: CPU features: detected: GIC system register CPU interface May 14 17:56:57.822676 kernel: CPU features: detected: Spectre-v4 May 14 17:56:57.822682 kernel: CPU features: detected: Spectre-BHB May 14 17:56:57.822689 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 17:56:57.822695 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 17:56:57.822702 kernel: CPU features: detected: ARM erratum 1418040 May 14 17:56:57.822708 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 17:56:57.822714 kernel: alternatives: applying boot alternatives May 14 17:56:57.822721 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562 May 14 17:56:57.822728 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
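Editor's note: the kernel command line logged above is a flat string of space-separated `key=value` tokens (with `root=LABEL=ROOT` showing that values may themselves contain `=`). A minimal sketch of splitting such a string into a dict — a hypothetical illustration, not part of any kernel or Flatcar tooling:

```python
# Sketch: split a kernel command line into a dict. Hypothetical helper for
# illustration; the kernel's real parser also handles quoting, which this omits.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")  # split at the FIRST "=" only
        params[key] = value if sep else True    # bare flags become True
    return params

# Subset of the command line from the log above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 "
           "root=LABEL=ROOT console=ttyS0,115200 "
           "flatcar.first_boot=detected acpi=force")

params = parse_cmdline(cmdline)
print(params["root"])       # → LABEL=ROOT
print(params["mount.usr"])  # → /dev/mapper/usr
```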
May 14 17:56:57.822735 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 17:56:57.822741 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 17:56:57.822749 kernel: Fallback order for Node 0: 0 May 14 17:56:57.822755 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 May 14 17:56:57.822762 kernel: Policy zone: DMA May 14 17:56:57.822768 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 17:56:57.822774 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB May 14 17:56:57.822781 kernel: software IO TLB: area num 4. May 14 17:56:57.822787 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB May 14 17:56:57.822794 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) May 14 17:56:57.822800 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 17:56:57.822806 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 17:56:57.822813 kernel: rcu: RCU event tracing is enabled. May 14 17:56:57.822820 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 17:56:57.822828 kernel: Trampoline variant of Tasks RCU enabled. May 14 17:56:57.822834 kernel: Tracing variant of Tasks RCU enabled. May 14 17:56:57.822840 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 17:56:57.822847 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 17:56:57.822853 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 17:56:57.822860 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
May 14 17:56:57.822866 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 17:56:57.822872 kernel: GICv3: 256 SPIs implemented May 14 17:56:57.822879 kernel: GICv3: 0 Extended SPIs implemented May 14 17:56:57.822885 kernel: Root IRQ handler: gic_handle_irq May 14 17:56:57.822891 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 14 17:56:57.822899 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 May 14 17:56:57.822905 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 17:56:57.822911 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 17:56:57.822918 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1) May 14 17:56:57.822924 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1) May 14 17:56:57.822931 kernel: GICv3: using LPI property table @0x0000000040100000 May 14 17:56:57.822937 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000 May 14 17:56:57.822944 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 17:56:57.822950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 17:56:57.822956 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 17:56:57.822963 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 17:56:57.822969 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 17:56:57.822977 kernel: arm-pv: using stolen time PV May 14 17:56:57.822984 kernel: Console: colour dummy device 80x25 May 14 17:56:57.822990 kernel: ACPI: Core revision 20240827 May 14 17:56:57.822997 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) May 14 17:56:57.823004 kernel: pid_max: default: 32768 minimum: 301 May 14 17:56:57.823011 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 14 17:56:57.823017 kernel: landlock: Up and running. May 14 17:56:57.823024 kernel: SELinux: Initializing. May 14 17:56:57.823036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 17:56:57.823044 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 17:56:57.823056 kernel: rcu: Hierarchical SRCU implementation. May 14 17:56:57.823067 kernel: rcu: Max phase no-delay instances is 400. May 14 17:56:57.823074 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 14 17:56:57.823080 kernel: Remapping and enabling EFI services. May 14 17:56:57.823087 kernel: smp: Bringing up secondary CPUs ... May 14 17:56:57.823094 kernel: Detected PIPT I-cache on CPU1 May 14 17:56:57.823100 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 17:56:57.823107 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000 May 14 17:56:57.823116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 17:56:57.823126 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 17:56:57.823133 kernel: Detected PIPT I-cache on CPU2 May 14 17:56:57.823141 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 17:56:57.823148 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000 May 14 17:56:57.823168 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 17:56:57.823176 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 17:56:57.823183 kernel: Detected PIPT I-cache on CPU3 May 14 17:56:57.823190 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 17:56:57.823199 kernel: GICv3: CPU3: using allocated LPI pending table 
@0x0000000040140000 May 14 17:56:57.823206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 17:56:57.823213 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 17:56:57.823220 kernel: smp: Brought up 1 node, 4 CPUs May 14 17:56:57.823227 kernel: SMP: Total of 4 processors activated. May 14 17:56:57.823233 kernel: CPU: All CPU(s) started at EL1 May 14 17:56:57.823240 kernel: CPU features: detected: 32-bit EL0 Support May 14 17:56:57.823247 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 17:56:57.823256 kernel: CPU features: detected: Common not Private translations May 14 17:56:57.823262 kernel: CPU features: detected: CRC32 instructions May 14 17:56:57.823269 kernel: CPU features: detected: Enhanced Virtualization Traps May 14 17:56:57.823276 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 17:56:57.823283 kernel: CPU features: detected: LSE atomic instructions May 14 17:56:57.823290 kernel: CPU features: detected: Privileged Access Never May 14 17:56:57.823297 kernel: CPU features: detected: RAS Extension Support May 14 17:56:57.823304 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 17:56:57.823311 kernel: alternatives: applying system-wide alternatives May 14 17:56:57.823320 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 May 14 17:56:57.823327 kernel: Memory: 2438884K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 127636K reserved, 0K cma-reserved) May 14 17:56:57.823334 kernel: devtmpfs: initialized May 14 17:56:57.823341 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 17:56:57.823348 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 17:56:57.823355 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 14 17:56:57.823361 kernel: 0 pages in range 
for non-PLT usage May 14 17:56:57.823368 kernel: 508544 pages in range for PLT usage May 14 17:56:57.823375 kernel: pinctrl core: initialized pinctrl subsystem May 14 17:56:57.823383 kernel: SMBIOS 3.0.0 present. May 14 17:56:57.823390 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 14 17:56:57.823397 kernel: DMI: Memory slots populated: 1/1 May 14 17:56:57.823403 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 17:56:57.823410 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 17:56:57.823417 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 17:56:57.823424 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 17:56:57.823431 kernel: audit: initializing netlink subsys (disabled) May 14 17:56:57.823438 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1 May 14 17:56:57.823456 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 17:56:57.823463 kernel: cpuidle: using governor menu May 14 17:56:57.823470 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
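Editor's note: the "Calibrating delay loop (skipped)" entry earlier in the log derives BogoMIPS from the 25.00 MHz arch timer instead of measuring it. Assuming CONFIG_HZ=1000 (an assumption — the log does not state the tick rate), the printed figures are internally consistent:

```python
# Reproduce the skipped-calibration BogoMIPS arithmetic from the log.
# CONFIG_HZ=1000 is an assumption; the other inputs come from the log itself.
HZ = 1000                     # assumed kernel tick rate
timer_hz = 25_000_000         # "cp15 timer(s) running at 25.00MHz"
lpj = timer_hz // HZ          # loops_per_jiffy when calibration is skipped
bogomips = lpj / (500_000 / HZ)
print(lpj, bogomips)          # → 25000 50.0, matching "50.00 BogoMIPS (lpj=25000)"
```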
May 14 17:56:57.823477 kernel: ASID allocator initialised with 32768 entries May 14 17:56:57.823484 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 17:56:57.823491 kernel: Serial: AMBA PL011 UART driver May 14 17:56:57.823503 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 17:56:57.823510 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 14 17:56:57.823517 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 14 17:56:57.823526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 14 17:56:57.823532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 17:56:57.823540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 14 17:56:57.823547 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 14 17:56:57.823553 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 14 17:56:57.823560 kernel: ACPI: Added _OSI(Module Device) May 14 17:56:57.823567 kernel: ACPI: Added _OSI(Processor Device) May 14 17:56:57.823574 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 17:56:57.823581 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 17:56:57.823589 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 17:56:57.823596 kernel: ACPI: Interpreter enabled May 14 17:56:57.823603 kernel: ACPI: Using GIC for interrupt routing May 14 17:56:57.823610 kernel: ACPI: MCFG table detected, 1 entries May 14 17:56:57.823616 kernel: ACPI: CPU0 has been hot-added May 14 17:56:57.823623 kernel: ACPI: CPU1 has been hot-added May 14 17:56:57.823630 kernel: ACPI: CPU2 has been hot-added May 14 17:56:57.823637 kernel: ACPI: CPU3 has been hot-added May 14 17:56:57.823644 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 14 17:56:57.823651 kernel: printk: legacy console [ttyAMA0] enabled May 14 17:56:57.823659 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 
[bus 00-ff]) May 14 17:56:57.823825 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 17:56:57.823892 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 17:56:57.823952 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 17:56:57.824011 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 14 17:56:57.824071 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 14 17:56:57.824080 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 14 17:56:57.824089 kernel: PCI host bridge to bus 0000:00 May 14 17:56:57.824180 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 14 17:56:57.824250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 14 17:56:57.824308 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 14 17:56:57.824363 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 17:56:57.824438 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint May 14 17:56:57.824525 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 14 17:56:57.824593 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] May 14 17:56:57.824658 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] May 14 17:56:57.824719 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] May 14 17:56:57.824778 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned May 14 17:56:57.824839 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned May 14 17:56:57.824901 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned May 14 17:56:57.824961 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 14 17:56:57.825015 kernel: pci_bus 0000:00: resource 5 [io 
0x0000-0xffff window] May 14 17:56:57.825068 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 14 17:56:57.825077 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 17:56:57.825084 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 17:56:57.825092 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 17:56:57.825099 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 17:56:57.825106 kernel: iommu: Default domain type: Translated May 14 17:56:57.825114 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 17:56:57.825121 kernel: efivars: Registered efivars operations May 14 17:56:57.825128 kernel: vgaarb: loaded May 14 17:56:57.825135 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 17:56:57.825142 kernel: VFS: Disk quotas dquot_6.6.0 May 14 17:56:57.825149 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 17:56:57.825194 kernel: pnp: PnP ACPI init May 14 17:56:57.825271 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 17:56:57.825282 kernel: pnp: PnP ACPI: found 1 devices May 14 17:56:57.825292 kernel: NET: Registered PF_INET protocol family May 14 17:56:57.825299 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 17:56:57.825306 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 17:56:57.825313 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 17:56:57.825321 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 17:56:57.825328 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 14 17:56:57.825335 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 17:56:57.825342 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 17:56:57.825350 kernel: 
UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 17:56:57.825357 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 17:56:57.825364 kernel: PCI: CLS 0 bytes, default 64 May 14 17:56:57.825371 kernel: kvm [1]: HYP mode not available May 14 17:56:57.825378 kernel: Initialise system trusted keyrings May 14 17:56:57.825385 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 17:56:57.825391 kernel: Key type asymmetric registered May 14 17:56:57.825399 kernel: Asymmetric key parser 'x509' registered May 14 17:56:57.825405 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 14 17:56:57.825412 kernel: io scheduler mq-deadline registered May 14 17:56:57.825421 kernel: io scheduler kyber registered May 14 17:56:57.825427 kernel: io scheduler bfq registered May 14 17:56:57.825434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 17:56:57.825441 kernel: ACPI: button: Power Button [PWRB] May 14 17:56:57.825449 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 17:56:57.825523 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 14 17:56:57.825533 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 17:56:57.825540 kernel: thunder_xcv, ver 1.0 May 14 17:56:57.825547 kernel: thunder_bgx, ver 1.0 May 14 17:56:57.825556 kernel: nicpf, ver 1.0 May 14 17:56:57.825563 kernel: nicvf, ver 1.0 May 14 17:56:57.825637 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 17:56:57.825695 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T17:56:57 UTC (1747245417) May 14 17:56:57.825704 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 17:56:57.825711 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available May 14 17:56:57.825718 kernel: watchdog: NMI not fully supported May 14 17:56:57.825725 kernel: watchdog: Hard watchdog permanently disabled May 14 17:56:57.825734 
kernel: NET: Registered PF_INET6 protocol family May 14 17:56:57.825741 kernel: Segment Routing with IPv6 May 14 17:56:57.825748 kernel: In-situ OAM (IOAM) with IPv6 May 14 17:56:57.825754 kernel: NET: Registered PF_PACKET protocol family May 14 17:56:57.825761 kernel: Key type dns_resolver registered May 14 17:56:57.825768 kernel: registered taskstats version 1 May 14 17:56:57.825775 kernel: Loading compiled-in X.509 certificates May 14 17:56:57.825782 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: c0c250ba312a1bb9bceb2432c486db6e5999df1a' May 14 17:56:57.825789 kernel: Demotion targets for Node 0: null May 14 17:56:57.825797 kernel: Key type .fscrypt registered May 14 17:56:57.825804 kernel: Key type fscrypt-provisioning registered May 14 17:56:57.825811 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 17:56:57.825818 kernel: ima: Allocated hash algorithm: sha1 May 14 17:56:57.825825 kernel: ima: No architecture policies found May 14 17:56:57.825832 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 17:56:57.825839 kernel: clk: Disabling unused clocks May 14 17:56:57.825846 kernel: PM: genpd: Disabling unused power domains May 14 17:56:57.825852 kernel: Warning: unable to open an initial console. May 14 17:56:57.825861 kernel: Freeing unused kernel memory: 39424K May 14 17:56:57.825868 kernel: Run /init as init process May 14 17:56:57.825875 kernel: with arguments: May 14 17:56:57.825882 kernel: /init May 14 17:56:57.825889 kernel: with environment: May 14 17:56:57.825895 kernel: HOME=/ May 14 17:56:57.825902 kernel: TERM=linux May 14 17:56:57.825909 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 17:56:57.825916 systemd[1]: Successfully made /usr/ read-only. 
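Editor's note: the rtc-efi entry above pairs an epoch value with its UTC rendering ("2025-05-14T17:56:57 UTC (1747245417)"); the correspondence can be checked directly:

```python
# Check the rtc-efi log entry: epoch 1747245417 should render as
# 2025-05-14T17:56:57 in UTC, exactly as logged.
from datetime import datetime, timezone

ts = datetime.fromtimestamp(1747245417, tz=timezone.utc)
print(ts.isoformat())   # → 2025-05-14T17:56:57+00:00
```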
May 14 17:56:57.825927 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 17:56:57.825935 systemd[1]: Detected virtualization kvm. May 14 17:56:57.825942 systemd[1]: Detected architecture arm64. May 14 17:56:57.825949 systemd[1]: Running in initrd. May 14 17:56:57.825957 systemd[1]: No hostname configured, using default hostname. May 14 17:56:57.825965 systemd[1]: Hostname set to . May 14 17:56:57.825973 systemd[1]: Initializing machine ID from VM UUID. May 14 17:56:57.825981 systemd[1]: Queued start job for default target initrd.target. May 14 17:56:57.825988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 17:56:57.825996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 17:56:57.826004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 17:56:57.826011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 17:56:57.826019 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 17:56:57.826027 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 17:56:57.826037 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 17:56:57.826045 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 17:56:57.826053 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 14 17:56:57.826060 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 17:56:57.826068 systemd[1]: Reached target paths.target - Path Units. May 14 17:56:57.826075 systemd[1]: Reached target slices.target - Slice Units. May 14 17:56:57.826083 systemd[1]: Reached target swap.target - Swaps. May 14 17:56:57.826090 systemd[1]: Reached target timers.target - Timer Units. May 14 17:56:57.826099 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 17:56:57.826106 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 17:56:57.826114 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 17:56:57.826121 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 17:56:57.826129 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 17:56:57.826136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 17:56:57.826144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 17:56:57.826151 systemd[1]: Reached target sockets.target - Socket Units. May 14 17:56:57.826182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 17:56:57.826191 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 17:56:57.826198 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 17:56:57.826206 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 14 17:56:57.826214 systemd[1]: Starting systemd-fsck-usr.service... May 14 17:56:57.826221 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 17:56:57.826229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 14 17:56:57.826237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:56:57.826245 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 17:56:57.826255 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 17:56:57.826262 systemd[1]: Finished systemd-fsck-usr.service. May 14 17:56:57.826269 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 17:56:57.826305 systemd-journald[243]: Collecting audit messages is disabled. May 14 17:56:57.826326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:56:57.826334 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 17:56:57.826342 systemd-journald[243]: Journal started May 14 17:56:57.826362 systemd-journald[243]: Runtime Journal (/run/log/journal/80b184f6cdb347a7a057fd81f200e440) is 6M, max 48.5M, 42.4M free. May 14 17:56:57.815511 systemd-modules-load[245]: Inserted module 'overlay' May 14 17:56:57.829180 systemd[1]: Started systemd-journald.service - Journal Service. May 14 17:56:57.829214 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 17:56:57.832217 systemd-modules-load[245]: Inserted module 'br_netfilter' May 14 17:56:57.833177 kernel: Bridge firewalling registered May 14 17:56:57.842257 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 17:56:57.843623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 17:56:57.848075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 17:56:57.849681 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
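Editor's note: the bridge message above ("filtering via arp/ip/ip6tables is no longer available by default") means br_netfilter must now be loaded explicitly; here systemd-modules-load inserts it during early boot. On a systemd-based system, a drop-in like the following restores the old behaviour — a sketch, only needed if bridged traffic must traverse ip/ip6/arptables:

```
# /etc/modules-load.d/br_netfilter.conf
# Load br_netfilter at boot so bridged frames can be filtered
# by ip/ip6/arptables (only required if you rely on that behaviour).
br_netfilter
```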
May 14 17:56:57.851336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 17:56:57.858526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 17:56:57.863130 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 17:56:57.863361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 17:56:57.866913 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 17:56:57.869351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 17:56:57.872866 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 17:56:57.875297 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 17:56:57.900646 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562 May 14 17:56:57.916774 systemd-resolved[288]: Positive Trust Anchors: May 14 17:56:57.916789 systemd-resolved[288]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 17:56:57.916821 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 17:56:57.921601 systemd-resolved[288]: Defaulting to hostname 'linux'. May 14 17:56:57.922577 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 17:56:57.926238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 17:56:57.979188 kernel: SCSI subsystem initialized May 14 17:56:57.984176 kernel: Loading iSCSI transport class v2.0-870. May 14 17:56:57.996212 kernel: iscsi: registered transport (tcp) May 14 17:56:58.011189 kernel: iscsi: registered transport (qla4xxx) May 14 17:56:58.011205 kernel: QLogic iSCSI HBA Driver May 14 17:56:58.027841 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 17:56:58.043431 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 17:56:58.044972 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 17:56:58.091225 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 17:56:58.093560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
May 14 17:56:58.154190 kernel: raid6: neonx8 gen() 15719 MB/s May 14 17:56:58.171187 kernel: raid6: neonx4 gen() 15748 MB/s May 14 17:56:58.188183 kernel: raid6: neonx2 gen() 13142 MB/s May 14 17:56:58.205175 kernel: raid6: neonx1 gen() 10464 MB/s May 14 17:56:58.222172 kernel: raid6: int64x8 gen() 6900 MB/s May 14 17:56:58.239173 kernel: raid6: int64x4 gen() 7350 MB/s May 14 17:56:58.256174 kernel: raid6: int64x2 gen() 6096 MB/s May 14 17:56:58.273195 kernel: raid6: int64x1 gen() 5036 MB/s May 14 17:56:58.273231 kernel: raid6: using algorithm neonx4 gen() 15748 MB/s May 14 17:56:58.290193 kernel: raid6: .... xor() 12360 MB/s, rmw enabled May 14 17:56:58.290220 kernel: raid6: using neon recovery algorithm May 14 17:56:58.295183 kernel: xor: measuring software checksum speed May 14 17:56:58.295218 kernel: 8regs : 21550 MB/sec May 14 17:56:58.296196 kernel: 32regs : 19716 MB/sec May 14 17:56:58.296211 kernel: arm64_neon : 28061 MB/sec May 14 17:56:58.296221 kernel: xor: using function: arm64_neon (28061 MB/sec) May 14 17:56:58.355195 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 17:56:58.362248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 17:56:58.365305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 17:56:58.394912 systemd-udevd[498]: Using default interface naming scheme 'v255'. May 14 17:56:58.399332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 17:56:58.401506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 17:56:58.431848 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation May 14 17:56:58.453595 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 17:56:58.455869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 17:56:58.508801 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 14 17:56:58.510930 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 17:56:58.563533 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 14 17:56:58.576573 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 17:56:58.576669 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 17:56:58.576680 kernel: GPT:9289727 != 19775487 May 14 17:56:58.576695 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 17:56:58.576704 kernel: GPT:9289727 != 19775487 May 14 17:56:58.576712 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 17:56:58.576720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 17:56:58.569541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 17:56:58.569666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:56:58.571959 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:56:58.575833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:56:58.598549 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 17:56:58.608822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:56:58.610236 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 17:56:58.622591 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 17:56:58.623685 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 17:56:58.632561 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 17:56:58.639752 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
May 14 17:56:58.640852 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 17:56:58.642823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 17:56:58.644591 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 17:56:58.646749 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 17:56:58.648651 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 17:56:58.667191 disk-uuid[588]: Primary Header is updated. May 14 17:56:58.667191 disk-uuid[588]: Secondary Entries is updated. May 14 17:56:58.667191 disk-uuid[588]: Secondary Header is updated. May 14 17:56:58.671210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 17:56:58.674145 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 17:56:59.683199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 17:56:59.683809 disk-uuid[593]: The operation has completed successfully. May 14 17:56:59.714173 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 17:56:59.714295 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 17:56:59.742312 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 17:56:59.764109 sh[608]: Success May 14 17:56:59.777009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 17:56:59.777050 kernel: device-mapper: uevent: version 1.0.3 May 14 17:56:59.777061 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 17:56:59.792204 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 14 17:56:59.817858 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 17:56:59.820197 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 14 17:56:59.836401 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 17:56:59.843183 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 17:56:59.843217 kernel: BTRFS: device fsid e21bbf34-4c71-4257-bd6f-908a2b81e5ab devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (620) May 14 17:56:59.845228 kernel: BTRFS info (device dm-0): first mount of filesystem e21bbf34-4c71-4257-bd6f-908a2b81e5ab May 14 17:56:59.846362 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 17:56:59.846381 kernel: BTRFS info (device dm-0): using free-space-tree May 14 17:56:59.849699 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 17:56:59.850946 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 17:56:59.852208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 17:56:59.852999 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 17:56:59.854709 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 17:56:59.871754 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (653) May 14 17:56:59.871818 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:56:59.872634 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 17:56:59.872680 kernel: BTRFS info (device vda6): using free-space-tree May 14 17:56:59.879182 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:56:59.879587 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 17:56:59.881648 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 14 17:56:59.965669 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 17:56:59.969319 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 17:57:00.020451 systemd-networkd[796]: lo: Link UP May 14 17:57:00.020462 systemd-networkd[796]: lo: Gained carrier May 14 17:57:00.021341 systemd-networkd[796]: Enumeration completed May 14 17:57:00.022411 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:57:00.022415 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 17:57:00.023604 systemd-networkd[796]: eth0: Link UP May 14 17:57:00.023607 systemd-networkd[796]: eth0: Gained carrier May 14 17:57:00.023616 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:57:00.024966 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 17:57:00.026211 systemd[1]: Reached target network.target - Network. 
May 14 17:57:00.043221 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 17:57:00.047456 ignition[698]: Ignition 2.21.0 May 14 17:57:00.047469 ignition[698]: Stage: fetch-offline May 14 17:57:00.047506 ignition[698]: no configs at "/usr/lib/ignition/base.d" May 14 17:57:00.047514 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 17:57:00.047700 ignition[698]: parsed url from cmdline: "" May 14 17:57:00.047704 ignition[698]: no config URL provided May 14 17:57:00.047708 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" May 14 17:57:00.047715 ignition[698]: no config at "/usr/lib/ignition/user.ign" May 14 17:57:00.047732 ignition[698]: op(1): [started] loading QEMU firmware config module May 14 17:57:00.047736 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 17:57:00.056257 ignition[698]: op(1): [finished] loading QEMU firmware config module May 14 17:57:00.093860 ignition[698]: parsing config with SHA512: 8532920a5197bcc1900ffac83caddd5c807c320f8ef7288fcdcbdf41ab4c5b45e5c78211e5183b95bd3d58eb9dfb0019ca5bddad6169c71404ecef88ad564e9c May 14 17:57:00.097953 unknown[698]: fetched base config from "system" May 14 17:57:00.097964 unknown[698]: fetched user config from "qemu" May 14 17:57:00.098329 ignition[698]: fetch-offline: fetch-offline passed May 14 17:57:00.098387 ignition[698]: Ignition finished successfully May 14 17:57:00.100827 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 17:57:00.102606 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 17:57:00.103382 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 14 17:57:00.137557 ignition[808]: Ignition 2.21.0 May 14 17:57:00.137572 ignition[808]: Stage: kargs May 14 17:57:00.137717 ignition[808]: no configs at "/usr/lib/ignition/base.d" May 14 17:57:00.137725 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 17:57:00.138735 ignition[808]: kargs: kargs passed May 14 17:57:00.138799 ignition[808]: Ignition finished successfully May 14 17:57:00.141689 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 17:57:00.143530 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 17:57:00.175353 ignition[816]: Ignition 2.21.0 May 14 17:57:00.175368 ignition[816]: Stage: disks May 14 17:57:00.175570 ignition[816]: no configs at "/usr/lib/ignition/base.d" May 14 17:57:00.175579 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 17:57:00.177047 ignition[816]: disks: disks passed May 14 17:57:00.179112 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 17:57:00.177102 ignition[816]: Ignition finished successfully May 14 17:57:00.180595 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 17:57:00.182307 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 17:57:00.183809 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 17:57:00.185384 systemd[1]: Reached target sysinit.target - System Initialization. May 14 17:57:00.186793 systemd[1]: Reached target basic.target - Basic System. May 14 17:57:00.189286 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 17:57:00.220349 systemd-resolved[288]: Detected conflict on linux IN A 10.0.0.30 May 14 17:57:00.220365 systemd-resolved[288]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. 
May 14 17:57:00.223243 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 14 17:57:00.225513 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 17:57:00.227941 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 17:57:00.300198 kernel: EXT4-fs (vda9): mounted filesystem a9c1ea72-ce96-48c1-8c16-d7102e51beed r/w with ordered data mode. Quota mode: none. May 14 17:57:00.300974 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 17:57:00.302271 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 17:57:00.305267 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 17:57:00.307518 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 17:57:00.308519 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 17:57:00.308575 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 17:57:00.308597 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 17:57:00.320551 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 17:57:00.323017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 17:57:00.327712 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (834) May 14 17:57:00.327779 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:57:00.327807 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 17:57:00.327834 kernel: BTRFS info (device vda6): using free-space-tree May 14 17:57:00.335603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 17:57:00.392467 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory May 14 17:57:00.397855 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory May 14 17:57:00.402038 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory May 14 17:57:00.404748 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory May 14 17:57:00.486720 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 17:57:00.488769 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 17:57:00.492306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 17:57:00.511184 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:57:00.524388 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 17:57:00.530027 ignition[948]: INFO : Ignition 2.21.0 May 14 17:57:00.530027 ignition[948]: INFO : Stage: mount May 14 17:57:00.531425 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 17:57:00.531425 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 17:57:00.534297 ignition[948]: INFO : mount: mount passed May 14 17:57:00.534297 ignition[948]: INFO : Ignition finished successfully May 14 17:57:00.534466 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 17:57:00.537260 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 17:57:00.843953 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 17:57:00.845528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 14 17:57:00.867443 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (961) May 14 17:57:00.867493 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:57:00.867505 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 17:57:00.868194 kernel: BTRFS info (device vda6): using free-space-tree May 14 17:57:00.872377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 17:57:00.897177 ignition[978]: INFO : Ignition 2.21.0 May 14 17:57:00.897177 ignition[978]: INFO : Stage: files May 14 17:57:00.899768 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 17:57:00.899768 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 17:57:00.899768 ignition[978]: DEBUG : files: compiled without relabeling support, skipping May 14 17:57:00.903242 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 17:57:00.903242 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 17:57:00.905972 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 17:57:00.905972 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 17:57:00.905972 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 17:57:00.905470 unknown[978]: wrote ssh authorized keys file for user: core May 14 17:57:00.910552 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 17:57:00.910552 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 17:57:00.966003 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 17:57:01.194002 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 17:57:01.194002 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 17:57:01.197944 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 17:57:01.612809 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 17:57:01.782879 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 17:57:01.784925 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 17:57:01.783444 systemd-networkd[796]: eth0: Gained IPv6LL May 14 17:57:01.799792 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 17:57:01.799792 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 17:57:01.799792 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 17:57:01.799792 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 17:57:01.799792 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 17:57:01.799792 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 14 17:57:01.984963 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 17:57:02.340708 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 17:57:02.340708 ignition[978]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 17:57:02.344592 ignition[978]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 14 17:57:02.359210 ignition[978]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 17:57:02.362246 ignition[978]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 17:57:02.364884 ignition[978]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 14 17:57:02.364884 ignition[978]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 14 17:57:02.364884 ignition[978]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 14 17:57:02.364884 ignition[978]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 17:57:02.364884 ignition[978]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 17:57:02.364884 ignition[978]: INFO : files: files passed May 14 17:57:02.364884 ignition[978]: INFO : Ignition finished successfully May 14 17:57:02.366730 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 17:57:02.370197 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 17:57:02.372110 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 17:57:02.386361 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 17:57:02.386459 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 17:57:02.389578 initrd-setup-root-after-ignition[1006]: grep: /sysroot/oem/oem-release: No such file or directory May 14 17:57:02.390875 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 17:57:02.390875 initrd-setup-root-after-ignition[1008]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 17:57:02.396322 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 17:57:02.392247 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 17:57:02.393847 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 17:57:02.395688 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 17:57:02.461150 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 17:57:02.462216 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 17:57:02.464874 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 17:57:02.465949 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 17:57:02.467814 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 17:57:02.468711 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 17:57:02.500098 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 14 17:57:02.502650 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 17:57:02.526452 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 17:57:02.527790 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 17:57:02.529880 systemd[1]: Stopped target timers.target - Timer Units. May 14 17:57:02.531670 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 17:57:02.531793 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 17:57:02.534242 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 17:57:02.536358 systemd[1]: Stopped target basic.target - Basic System. May 14 17:57:02.538092 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 17:57:02.539932 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 17:57:02.541939 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 17:57:02.544089 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 17:57:02.546108 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 17:57:02.548053 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 17:57:02.550086 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 17:57:02.552151 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 17:57:02.553956 systemd[1]: Stopped target swap.target - Swaps. May 14 17:57:02.555528 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 17:57:02.555656 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 17:57:02.558065 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 17:57:02.559246 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 14 17:57:02.561277 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 17:57:02.562118 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 17:57:02.563380 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 17:57:02.563505 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 17:57:02.570251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 17:57:02.570373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 17:57:02.579734 systemd[1]: Stopped target paths.target - Path Units. May 14 17:57:02.581232 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 17:57:02.583084 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 17:57:02.584433 systemd[1]: Stopped target slices.target - Slice Units. May 14 17:57:02.586364 systemd[1]: Stopped target sockets.target - Socket Units. May 14 17:57:02.588003 systemd[1]: iscsid.socket: Deactivated successfully. May 14 17:57:02.588093 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 17:57:02.589936 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 17:57:02.590021 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 17:57:02.592289 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 17:57:02.592407 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 17:57:02.594254 systemd[1]: ignition-files.service: Deactivated successfully. May 14 17:57:02.594354 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 17:57:02.596857 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 17:57:02.599241 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
May 14 17:57:02.600082 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 17:57:02.600245 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 17:57:02.602906 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 17:57:02.603054 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 17:57:02.610764 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 17:57:02.611575 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 17:57:02.621002 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 17:57:02.625618 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 17:57:02.625724 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 17:57:02.629579 ignition[1033]: INFO : Ignition 2.21.0 May 14 17:57:02.629579 ignition[1033]: INFO : Stage: umount May 14 17:57:02.632049 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 17:57:02.632049 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 17:57:02.634379 ignition[1033]: INFO : umount: umount passed May 14 17:57:02.634379 ignition[1033]: INFO : Ignition finished successfully May 14 17:57:02.634208 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 17:57:02.635248 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 17:57:02.637303 systemd[1]: Stopped target network.target - Network. May 14 17:57:02.638793 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 17:57:02.638856 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 17:57:02.640696 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 17:57:02.640767 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 17:57:02.642429 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 14 17:57:02.642494 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 17:57:02.644241 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 17:57:02.644284 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 17:57:02.646092 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 17:57:02.646143 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 17:57:02.648201 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 17:57:02.649969 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 17:57:02.664489 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 17:57:02.664649 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 17:57:02.668655 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 17:57:02.668869 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 17:57:02.668994 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 17:57:02.671982 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 17:57:02.672582 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 17:57:02.673976 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 17:57:02.674015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 17:57:02.680711 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 17:57:02.681781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 17:57:02.681841 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 17:57:02.684064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 17:57:02.684108 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 17:57:02.687202 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 17:57:02.687244 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 17:57:02.689261 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 17:57:02.689305 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 17:57:02.692688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 17:57:02.696421 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 17:57:02.696487 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 17:57:02.709749 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 17:57:02.711285 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 17:57:02.712917 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 17:57:02.712955 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 17:57:02.714856 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 17:57:02.714885 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 17:57:02.716806 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 17:57:02.716863 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 17:57:02.719659 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 17:57:02.719705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 17:57:02.722389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 17:57:02.722442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 17:57:02.726219 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 17:57:02.727484 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 17:57:02.727544 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 17:57:02.730909 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 17:57:02.730954 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 17:57:02.734175 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 14 17:57:02.734218 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 17:57:02.737779 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 17:57:02.737819 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 17:57:02.740149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 17:57:02.740226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 17:57:02.744538 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 17:57:02.744589 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 14 17:57:02.744617 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 17:57:02.744646 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 17:57:02.744904 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 17:57:02.748270 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 17:57:02.753430 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 17:57:02.753522 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 17:57:02.756127 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 17:57:02.758347 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 17:57:02.779326 systemd[1]: Switching root.
May 14 17:57:02.812222 systemd-journald[243]: Journal stopped
May 14 17:57:03.627027 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
May 14 17:57:03.627181 kernel: SELinux: policy capability network_peer_controls=1
May 14 17:57:03.627200 kernel: SELinux: policy capability open_perms=1
May 14 17:57:03.627211 kernel: SELinux: policy capability extended_socket_class=1
May 14 17:57:03.627221 kernel: SELinux: policy capability always_check_network=0
May 14 17:57:03.627230 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 17:57:03.627239 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 17:57:03.627248 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 17:57:03.627259 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 17:57:03.627270 kernel: SELinux: policy capability userspace_initial_context=0
May 14 17:57:03.627287 kernel: audit: type=1403 audit(1747245423.000:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 17:57:03.627307 systemd[1]: Successfully loaded SELinux policy in 44.771ms.
May 14 17:57:03.627327 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.351ms.
May 14 17:57:03.627338 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 17:57:03.627350 systemd[1]: Detected virtualization kvm.
May 14 17:57:03.627360 systemd[1]: Detected architecture arm64.
May 14 17:57:03.627370 systemd[1]: Detected first boot.
May 14 17:57:03.627381 systemd[1]: Initializing machine ID from VM UUID.
May 14 17:57:03.627392 zram_generator::config[1078]: No configuration found.
May 14 17:57:03.627406 kernel: NET: Registered PF_VSOCK protocol family
May 14 17:57:03.627420 systemd[1]: Populated /etc with preset unit settings.
May 14 17:57:03.627434 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 17:57:03.627444 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 17:57:03.627454 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 17:57:03.627474 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 17:57:03.627487 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 17:57:03.627498 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 17:57:03.627507 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 17:57:03.627517 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 17:57:03.627530 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 17:57:03.627540 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 17:57:03.627550 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 17:57:03.627561 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 17:57:03.627571 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 17:57:03.627581 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 17:57:03.627592 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 17:57:03.627602 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 17:57:03.627614 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 17:57:03.627624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 17:57:03.627636 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 17:57:03.627648 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 17:57:03.627658 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 17:57:03.627668 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 17:57:03.627683 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 17:57:03.627700 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 17:57:03.627712 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 17:57:03.627722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 17:57:03.627731 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 17:57:03.627741 systemd[1]: Reached target slices.target - Slice Units.
May 14 17:57:03.627751 systemd[1]: Reached target swap.target - Swaps.
May 14 17:57:03.627761 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 17:57:03.627771 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 17:57:03.627781 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 17:57:03.627791 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 17:57:03.627825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 17:57:03.627837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 17:57:03.627847 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 17:57:03.627857 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 17:57:03.627867 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 17:57:03.627876 systemd[1]: Mounting media.mount - External Media Directory...
May 14 17:57:03.627887 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 17:57:03.627897 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 17:57:03.627907 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 17:57:03.627918 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 17:57:03.627929 systemd[1]: Reached target machines.target - Containers.
May 14 17:57:03.627939 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 17:57:03.627949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 17:57:03.627959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 17:57:03.627969 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 17:57:03.627980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:57:03.627990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 17:57:03.628000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:57:03.628011 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 17:57:03.628021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:57:03.628031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 17:57:03.628042 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 17:57:03.628052 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 17:57:03.628062 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 17:57:03.628072 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 17:57:03.628082 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:57:03.628094 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 17:57:03.628104 kernel: fuse: init (API version 7.41)
May 14 17:57:03.628114 kernel: loop: module loaded
May 14 17:57:03.628123 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 17:57:03.628134 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 17:57:03.628143 kernel: ACPI: bus type drm_connector registered
May 14 17:57:03.628154 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 17:57:03.628174 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 17:57:03.628184 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 17:57:03.628195 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 17:57:03.628205 systemd[1]: Stopped verity-setup.service.
May 14 17:57:03.628216 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 17:57:03.628226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 17:57:03.628237 systemd[1]: Mounted media.mount - External Media Directory.
May 14 17:57:03.628249 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 17:57:03.628285 systemd-journald[1153]: Collecting audit messages is disabled.
May 14 17:57:03.628308 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 17:57:03.628318 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 17:57:03.628341 systemd-journald[1153]: Journal started
May 14 17:57:03.628364 systemd-journald[1153]: Runtime Journal (/run/log/journal/80b184f6cdb347a7a057fd81f200e440) is 6M, max 48.5M, 42.4M free.
May 14 17:57:03.411733 systemd[1]: Queued start job for default target multi-user.target.
May 14 17:57:03.435188 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 17:57:03.435578 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 17:57:03.631183 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 17:57:03.633218 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 17:57:03.634731 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 17:57:03.636345 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 17:57:03.636527 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 17:57:03.637974 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:57:03.638145 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:57:03.639813 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 17:57:03.639973 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 17:57:03.641376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:57:03.642250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:57:03.643792 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 17:57:03.643952 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 17:57:03.645410 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:57:03.645593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:57:03.646991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 17:57:03.649330 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 17:57:03.652185 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 17:57:03.654008 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 17:57:03.669284 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 17:57:03.673036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 17:57:03.675364 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 17:57:03.676650 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 17:57:03.676694 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 17:57:03.678843 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 17:57:03.687133 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 17:57:03.688429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:57:03.689863 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 17:57:03.691981 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 17:57:03.693266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 17:57:03.696315 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 17:57:03.697628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 17:57:03.698974 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 17:57:03.700701 systemd-journald[1153]: Time spent on flushing to /var/log/journal/80b184f6cdb347a7a057fd81f200e440 is 14.541ms for 889 entries.
May 14 17:57:03.700701 systemd-journald[1153]: System Journal (/var/log/journal/80b184f6cdb347a7a057fd81f200e440) is 8M, max 195.6M, 187.6M free.
May 14 17:57:03.730697 systemd-journald[1153]: Received client request to flush runtime journal.
May 14 17:57:03.730751 kernel: loop0: detected capacity change from 0 to 138376
May 14 17:57:03.701338 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 17:57:03.704719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 17:57:03.709199 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 17:57:03.713678 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 17:57:03.714906 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 17:57:03.726987 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 17:57:03.729122 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 17:57:03.735154 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 17:57:03.737092 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 17:57:03.740219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 17:57:03.753547 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 14 17:57:03.753579 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 14 17:57:03.755153 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 17:57:03.762330 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 17:57:03.765510 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 17:57:03.772350 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 17:57:03.774183 kernel: loop1: detected capacity change from 0 to 194096
May 14 17:57:03.798187 kernel: loop2: detected capacity change from 0 to 107312
May 14 17:57:03.811039 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 17:57:03.813808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 17:57:03.829187 kernel: loop3: detected capacity change from 0 to 138376
May 14 17:57:03.838183 kernel: loop4: detected capacity change from 0 to 194096
May 14 17:57:03.843931 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 14 17:57:03.844312 kernel: loop5: detected capacity change from 0 to 107312
May 14 17:57:03.843955 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 14 17:57:03.848386 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 17:57:03.848776 (sd-merge)[1222]: Merged extensions into '/usr'.
May 14 17:57:03.849215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 17:57:03.856318 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 17:57:03.856336 systemd[1]: Reloading...
May 14 17:57:03.915210 zram_generator::config[1250]: No configuration found.
May 14 17:57:03.988204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 17:57:03.994301 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 17:57:04.050429 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 17:57:04.050535 systemd[1]: Reloading finished in 193 ms.
May 14 17:57:04.083870 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 17:57:04.087189 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 17:57:04.103497 systemd[1]: Starting ensure-sysext.service...
May 14 17:57:04.105357 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 17:57:04.116024 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
May 14 17:57:04.116038 systemd[1]: Reloading...
May 14 17:57:04.123605 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 17:57:04.123976 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 17:57:04.124332 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 17:57:04.124681 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 17:57:04.125385 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 17:57:04.125786 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
May 14 17:57:04.125901 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
May 14 17:57:04.135420 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
May 14 17:57:04.135490 systemd-tmpfiles[1285]: Skipping /boot
May 14 17:57:04.150078 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
May 14 17:57:04.150220 systemd-tmpfiles[1285]: Skipping /boot
May 14 17:57:04.162190 zram_generator::config[1312]: No configuration found.
May 14 17:57:04.228771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 17:57:04.289685 systemd[1]: Reloading finished in 173 ms.
May 14 17:57:04.314193 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 17:57:04.319809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 17:57:04.329211 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 17:57:04.331513 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 17:57:04.334590 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 17:57:04.340447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 17:57:04.344834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 17:57:04.348581 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 17:57:04.353539 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 17:57:04.357790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 17:57:04.361055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:57:04.363046 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:57:04.370179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:57:04.372313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:57:04.372425 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:57:04.375266 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 17:57:04.377545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:57:04.377744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:57:04.379525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:57:04.379674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:57:04.381608 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:57:04.381746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:57:04.388205 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 17:57:04.391200 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 17:57:04.393426 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
May 14 17:57:04.397667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 17:57:04.399490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:57:04.401539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:57:04.411103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:57:04.412199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:57:04.412314 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:57:04.413884 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 17:57:04.414922 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 17:57:04.417187 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 17:57:04.419099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:57:04.419264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:57:04.420868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:57:04.421046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:57:04.422795 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:57:04.422925 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:57:04.429904 augenrules[1391]: No rules
May 14 17:57:04.430302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 17:57:04.431488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 17:57:04.438140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 17:57:04.441408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 17:57:04.451013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 17:57:04.452245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 17:57:04.452375 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 17:57:04.452490 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 17:57:04.456749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 17:57:04.459696 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 17:57:04.461190 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 17:57:04.464624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 17:57:04.465334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 17:57:04.467017 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 17:57:04.468197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 17:57:04.470732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 17:57:04.470873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 17:57:04.472922 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 17:57:04.474578 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 17:57:04.474756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 17:57:04.479935 systemd[1]: Finished ensure-sysext.service. May 14 17:57:04.499330 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 17:57:04.500552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 17:57:04.500628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 17:57:04.502477 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 17:57:04.512689 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 17:57:04.582940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 17:57:04.589323 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 17:57:04.600996 systemd-networkd[1439]: lo: Link UP May 14 17:57:04.601003 systemd-networkd[1439]: lo: Gained carrier May 14 17:57:04.601758 systemd-networkd[1439]: Enumeration completed May 14 17:57:04.601860 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 17:57:04.603640 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:57:04.603651 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 17:57:04.604070 systemd-networkd[1439]: eth0: Link UP May 14 17:57:04.604194 systemd-networkd[1439]: eth0: Gained carrier May 14 17:57:04.604210 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:57:04.604469 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
May 14 17:57:04.607751 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 17:57:04.609451 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 17:57:04.610996 systemd[1]: Reached target time-set.target - System Time Set. May 14 17:57:04.618578 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 17:57:04.619036 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection. May 14 17:57:04.619357 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 17:57:04.620222 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 17:57:04.620269 systemd-timesyncd[1440]: Initial clock synchronization to Wed 2025-05-14 17:57:04.728120 UTC. May 14 17:57:04.631845 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 17:57:04.632689 systemd-resolved[1351]: Positive Trust Anchors: May 14 17:57:04.632709 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 17:57:04.632740 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 17:57:04.639793 systemd-resolved[1351]: Defaulting to hostname 'linux'. May 14 17:57:04.642565 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 14 17:57:04.644293 systemd[1]: Reached target network.target - Network. May 14 17:57:04.645173 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 17:57:04.646336 systemd[1]: Reached target sysinit.target - System Initialization. May 14 17:57:04.647410 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 17:57:04.648612 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 17:57:04.650109 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 17:57:04.651267 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 17:57:04.653326 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 17:57:04.654551 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 17:57:04.654584 systemd[1]: Reached target paths.target - Path Units. May 14 17:57:04.655512 systemd[1]: Reached target timers.target - Timer Units. May 14 17:57:04.657293 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 17:57:04.659427 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 17:57:04.662683 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 17:57:04.664357 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 17:57:04.665691 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 17:57:04.674399 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 17:57:04.676214 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 17:57:04.677845 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
May 14 17:57:04.679264 systemd[1]: Reached target sockets.target - Socket Units. May 14 17:57:04.680178 systemd[1]: Reached target basic.target - Basic System. May 14 17:57:04.681288 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 17:57:04.681319 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 17:57:04.682153 systemd[1]: Starting containerd.service - containerd container runtime... May 14 17:57:04.685363 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 17:57:04.688176 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 17:57:04.690763 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 17:57:04.692983 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 17:57:04.693959 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 17:57:04.699549 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 17:57:04.701554 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 17:57:04.704040 jq[1476]: false May 14 17:57:04.704790 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 17:57:04.708124 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 17:57:04.720402 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 17:57:04.722182 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 17:57:04.722566 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 14 17:57:04.725633 systemd[1]: Starting update-engine.service - Update Engine... May 14 17:57:04.726745 extend-filesystems[1477]: Found loop3 May 14 17:57:04.726745 extend-filesystems[1477]: Found loop4 May 14 17:57:04.732133 extend-filesystems[1477]: Found loop5 May 14 17:57:04.732133 extend-filesystems[1477]: Found vda May 14 17:57:04.732133 extend-filesystems[1477]: Found vda1 May 14 17:57:04.732133 extend-filesystems[1477]: Found vda2 May 14 17:57:04.732133 extend-filesystems[1477]: Found vda3 May 14 17:57:04.732133 extend-filesystems[1477]: Found usr May 14 17:57:04.732133 extend-filesystems[1477]: Found vda4 May 14 17:57:04.732133 extend-filesystems[1477]: Found vda6 May 14 17:57:04.732133 extend-filesystems[1477]: Found vda7 May 14 17:57:04.732133 extend-filesystems[1477]: Found vda9 May 14 17:57:04.732133 extend-filesystems[1477]: Checking size of /dev/vda9 May 14 17:57:04.727619 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 17:57:04.746590 extend-filesystems[1477]: Resized partition /dev/vda9 May 14 17:57:04.732371 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 17:57:04.749185 extend-filesystems[1501]: resize2fs 1.47.2 (1-Jan-2025) May 14 17:57:04.751387 jq[1494]: true May 14 17:57:04.736981 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 17:57:04.737143 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 17:57:04.737399 systemd[1]: motdgen.service: Deactivated successfully. May 14 17:57:04.737558 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 17:57:04.743509 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 17:57:04.743676 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 14 17:57:04.760593 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 17:57:04.761199 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 17:57:04.771468 jq[1500]: true May 14 17:57:04.771696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:57:04.804208 tar[1498]: linux-arm64/helm May 14 17:57:04.804045 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 17:57:04.803893 dbus-daemon[1474]: [system] SELinux support is enabled May 14 17:57:04.807631 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 17:57:04.807662 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 17:57:04.809979 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 17:57:04.810004 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 17:57:04.817181 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 17:57:04.835647 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (Power Button) May 14 17:57:04.835905 systemd-logind[1488]: New seat seat0. May 14 17:57:04.836492 systemd[1]: Started systemd-logind.service - User Login Management. 
May 14 17:57:04.836845 update_engine[1491]: I20250514 17:57:04.836037 1491 main.cc:92] Flatcar Update Engine starting May 14 17:57:04.838971 extend-filesystems[1501]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 17:57:04.838971 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 17:57:04.838971 extend-filesystems[1501]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 17:57:04.849009 extend-filesystems[1477]: Resized filesystem in /dev/vda9 May 14 17:57:04.850117 update_engine[1491]: I20250514 17:57:04.844560 1491 update_check_scheduler.cc:74] Next update check in 8m18s May 14 17:57:04.840378 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 17:57:04.840607 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 17:57:04.877177 bash[1536]: Updated "/home/core/.ssh/authorized_keys" May 14 17:57:04.880484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:57:04.882125 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 17:57:04.883683 systemd[1]: Started update-engine.service - Update Engine. May 14 17:57:04.889831 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 17:57:04.893374 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 17:57:04.957016 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 17:57:04.969325 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 17:57:04.993227 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 17:57:04.998636 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 17:57:05.016893 systemd[1]: issuegen.service: Deactivated successfully. May 14 17:57:05.018215 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 14 17:57:05.021616 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 17:57:05.023722 containerd[1502]: time="2025-05-14T17:57:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 17:57:05.026855 containerd[1502]: time="2025-05-14T17:57:05.026788056Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 17:57:05.036062 containerd[1502]: time="2025-05-14T17:57:05.036008989Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.135µs" May 14 17:57:05.036186 containerd[1502]: time="2025-05-14T17:57:05.036155097Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 17:57:05.036244 containerd[1502]: time="2025-05-14T17:57:05.036230177Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 17:57:05.036470 containerd[1502]: time="2025-05-14T17:57:05.036449500Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 17:57:05.036535 containerd[1502]: time="2025-05-14T17:57:05.036522027Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 17:57:05.036600 containerd[1502]: time="2025-05-14T17:57:05.036586810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 17:57:05.036708 containerd[1502]: time="2025-05-14T17:57:05.036690594Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 17:57:05.036761 containerd[1502]: time="2025-05-14T17:57:05.036747309Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 17:57:05.037087 containerd[1502]: time="2025-05-14T17:57:05.037058578Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 17:57:05.037155 containerd[1502]: time="2025-05-14T17:57:05.037141848Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 17:57:05.037237 containerd[1502]: time="2025-05-14T17:57:05.037220942Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 17:57:05.037304 containerd[1502]: time="2025-05-14T17:57:05.037290023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 17:57:05.037451 containerd[1502]: time="2025-05-14T17:57:05.037431103Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 17:57:05.037716 containerd[1502]: time="2025-05-14T17:57:05.037693399Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 17:57:05.037802 containerd[1502]: time="2025-05-14T17:57:05.037786033Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 17:57:05.037860 containerd[1502]: time="2025-05-14T17:57:05.037846682Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 17:57:05.037937 containerd[1502]: time="2025-05-14T17:57:05.037924600Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 17:57:05.038325 containerd[1502]: time="2025-05-14T17:57:05.038298747Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 17:57:05.038465 containerd[1502]: time="2025-05-14T17:57:05.038448381Z" level=info msg="metadata content store policy set" policy=shared May 14 17:57:05.042962 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 17:57:05.046163 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 17:57:05.048521 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 17:57:05.049973 systemd[1]: Reached target getty.target - Login Prompts. May 14 17:57:05.051161 containerd[1502]: time="2025-05-14T17:57:05.051089092Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 17:57:05.051237 containerd[1502]: time="2025-05-14T17:57:05.051209821Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 17:57:05.051237 containerd[1502]: time="2025-05-14T17:57:05.051231267Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 17:57:05.051333 containerd[1502]: time="2025-05-14T17:57:05.051245497Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 17:57:05.051333 containerd[1502]: time="2025-05-14T17:57:05.051305659Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 17:57:05.051333 containerd[1502]: time="2025-05-14T17:57:05.051318145Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 17:57:05.051333 containerd[1502]: time="2025-05-14T17:57:05.051330672Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 May 14 17:57:05.051398 containerd[1502]: time="2025-05-14T17:57:05.051343361Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 17:57:05.051398 containerd[1502]: time="2025-05-14T17:57:05.051364848Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 17:57:05.051398 containerd[1502]: time="2025-05-14T17:57:05.051376158Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 17:57:05.051449 containerd[1502]: time="2025-05-14T17:57:05.051401172Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 17:57:05.051449 containerd[1502]: time="2025-05-14T17:57:05.051414834Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 17:57:05.051592 containerd[1502]: time="2025-05-14T17:57:05.051559887Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 17:57:05.051619 containerd[1502]: time="2025-05-14T17:57:05.051590981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 17:57:05.051619 containerd[1502]: time="2025-05-14T17:57:05.051608170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 17:57:05.051657 containerd[1502]: time="2025-05-14T17:57:05.051619360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 17:57:05.051657 containerd[1502]: time="2025-05-14T17:57:05.051632008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 17:57:05.051657 containerd[1502]: time="2025-05-14T17:57:05.051643481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 17:57:05.051657 
containerd[1502]: time="2025-05-14T17:57:05.051654873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 17:57:05.051723 containerd[1502]: time="2025-05-14T17:57:05.051666103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 17:57:05.051723 containerd[1502]: time="2025-05-14T17:57:05.051676886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 17:57:05.051723 containerd[1502]: time="2025-05-14T17:57:05.051687954Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 17:57:05.051723 containerd[1502]: time="2025-05-14T17:57:05.051703805Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 17:57:05.051913 containerd[1502]: time="2025-05-14T17:57:05.051890372Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 17:57:05.051913 containerd[1502]: time="2025-05-14T17:57:05.051911939Z" level=info msg="Start snapshots syncer" May 14 17:57:05.051970 containerd[1502]: time="2025-05-14T17:57:05.051937601Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 17:57:05.054174 containerd[1502]: time="2025-05-14T17:57:05.054115426Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 17:57:05.054284 containerd[1502]: time="2025-05-14T17:57:05.054215520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 17:57:05.054349 containerd[1502]: time="2025-05-14T17:57:05.054317804Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 17:57:05.054506 containerd[1502]: time="2025-05-14T17:57:05.054477654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 17:57:05.054533 containerd[1502]: time="2025-05-14T17:57:05.054511100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 17:57:05.054533 containerd[1502]: time="2025-05-14T17:57:05.054529465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 17:57:05.054624 containerd[1502]: time="2025-05-14T17:57:05.054540573Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 17:57:05.054624 containerd[1502]: time="2025-05-14T17:57:05.054551964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 17:57:05.054624 containerd[1502]: time="2025-05-14T17:57:05.054562708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 17:57:05.054624 containerd[1502]: time="2025-05-14T17:57:05.054572843Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 17:57:05.054624 containerd[1502]: time="2025-05-14T17:57:05.054599478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 17:57:05.054624 containerd[1502]: time="2025-05-14T17:57:05.054623072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 17:57:05.054734 containerd[1502]: time="2025-05-14T17:57:05.054634788Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 17:57:05.054734 containerd[1502]: time="2025-05-14T17:57:05.054689477Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 17:57:05.054734 containerd[1502]: time="2025-05-14T17:57:05.054705045Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 17:57:05.054734 containerd[1502]: time="2025-05-14T17:57:05.054712828Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 17:57:05.054734 containerd[1502]: time="2025-05-14T17:57:05.054721909Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 17:57:05.054734 containerd[1502]: time="2025-05-14T17:57:05.054729653Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 17:57:05.054830 containerd[1502]: time="2025-05-14T17:57:05.054739342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 17:57:05.054830 containerd[1502]: time="2025-05-14T17:57:05.054749112Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 17:57:05.054830 containerd[1502]: time="2025-05-14T17:57:05.054824274Z" level=info msg="runtime interface created" May 14 17:57:05.054830 containerd[1502]: time="2025-05-14T17:57:05.054830152Z" level=info msg="created NRI interface" May 14 17:57:05.054892 containerd[1502]: time="2025-05-14T17:57:05.054838422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 17:57:05.054892 containerd[1502]: time="2025-05-14T17:57:05.054851355Z" level=info msg="Connect containerd service" May 14 17:57:05.054892 containerd[1502]: time="2025-05-14T17:57:05.054876733Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 17:57:05.055681 
containerd[1502]: time="2025-05-14T17:57:05.055647242Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166288961Z" level=info msg="Start subscribing containerd event" May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166357799Z" level=info msg="Start recovering state" May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166440420Z" level=info msg="Start event monitor" May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166456433Z" level=info msg="Start cni network conf syncer for default" May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166464541Z" level=info msg="Start streaming server" May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166473014Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166479460Z" level=info msg="runtime interface starting up..." May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166485541Z" level=info msg="starting plugins..." May 14 17:57:05.166470 containerd[1502]: time="2025-05-14T17:57:05.166498920Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 17:57:05.166993 containerd[1502]: time="2025-05-14T17:57:05.166973849Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 17:57:05.167074 containerd[1502]: time="2025-05-14T17:57:05.167025173Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 17:57:05.167169 containerd[1502]: time="2025-05-14T17:57:05.167120524Z" level=info msg="containerd successfully booted in 0.143749s" May 14 17:57:05.167286 systemd[1]: Started containerd.service - containerd container runtime. 
May 14 17:57:05.192292 tar[1498]: linux-arm64/LICENSE May 14 17:57:05.192292 tar[1498]: linux-arm64/README.md May 14 17:57:05.221695 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 17:57:06.199476 systemd-networkd[1439]: eth0: Gained IPv6LL May 14 17:57:06.205825 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 17:57:06.207569 systemd[1]: Reached target network-online.target - Network is Online. May 14 17:57:06.210563 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 17:57:06.212793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:57:06.225721 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 17:57:06.244278 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 17:57:06.248102 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 17:57:06.248314 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 17:57:06.250871 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 17:57:06.704797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:57:06.706456 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 17:57:06.709306 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 17:57:06.713514 systemd[1]: Startup finished in 2.093s (kernel) + 5.371s (initrd) + 3.768s (userspace) = 11.233s. 
May 14 17:57:07.163102 kubelet[1610]: E0514 17:57:07.163007 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 17:57:07.165799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 17:57:07.165941 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 17:57:07.166289 systemd[1]: kubelet.service: Consumed 792ms CPU time, 238.7M memory peak.
May 14 17:57:10.957532 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 17:57:10.958629 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:47712.service - OpenSSH per-connection server daemon (10.0.0.1:47712).
May 14 17:57:11.042594 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 47712 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:57:11.044391 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:57:11.050193 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 17:57:11.054375 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 17:57:11.059573 systemd-logind[1488]: New session 1 of user core.
May 14 17:57:11.069535 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 17:57:11.072216 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 17:57:11.091188 (systemd)[1628]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 17:57:11.093296 systemd-logind[1488]: New session c1 of user core.
May 14 17:57:11.198687 systemd[1628]: Queued start job for default target default.target.
May 14 17:57:11.215074 systemd[1628]: Created slice app.slice - User Application Slice.
May 14 17:57:11.215104 systemd[1628]: Reached target paths.target - Paths.
May 14 17:57:11.215140 systemd[1628]: Reached target timers.target - Timers.
May 14 17:57:11.216405 systemd[1628]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 17:57:11.225053 systemd[1628]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 17:57:11.225110 systemd[1628]: Reached target sockets.target - Sockets.
May 14 17:57:11.225144 systemd[1628]: Reached target basic.target - Basic System.
May 14 17:57:11.225194 systemd[1628]: Reached target default.target - Main User Target.
May 14 17:57:11.225223 systemd[1628]: Startup finished in 126ms.
May 14 17:57:11.225454 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 17:57:11.226817 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 17:57:11.293424 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:47718.service - OpenSSH per-connection server daemon (10.0.0.1:47718).
May 14 17:57:11.342068 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 47718 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:57:11.343302 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:57:11.348204 systemd-logind[1488]: New session 2 of user core.
May 14 17:57:11.356358 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 17:57:11.407489 sshd[1641]: Connection closed by 10.0.0.1 port 47718
May 14 17:57:11.408034 sshd-session[1639]: pam_unix(sshd:session): session closed for user core
May 14 17:57:11.422141 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:47718.service: Deactivated successfully.
May 14 17:57:11.424407 systemd[1]: session-2.scope: Deactivated successfully.
May 14 17:57:11.426313 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit.
May 14 17:57:11.428622 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:47724.service - OpenSSH per-connection server daemon (10.0.0.1:47724).
May 14 17:57:11.429209 systemd-logind[1488]: Removed session 2.
May 14 17:57:11.478219 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 47724 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:57:11.479261 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:57:11.483494 systemd-logind[1488]: New session 3 of user core.
May 14 17:57:11.507386 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 17:57:11.555136 sshd[1650]: Connection closed by 10.0.0.1 port 47724
May 14 17:57:11.555532 sshd-session[1647]: pam_unix(sshd:session): session closed for user core
May 14 17:57:11.575007 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:47724.service: Deactivated successfully.
May 14 17:57:11.576742 systemd[1]: session-3.scope: Deactivated successfully.
May 14 17:57:11.579236 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit.
May 14 17:57:11.581748 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:47726.service - OpenSSH per-connection server daemon (10.0.0.1:47726).
May 14 17:57:11.582202 systemd-logind[1488]: Removed session 3.
May 14 17:57:11.642958 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 47726 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:57:11.644200 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:57:11.648228 systemd-logind[1488]: New session 4 of user core.
May 14 17:57:11.662327 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 17:57:11.713456 sshd[1658]: Connection closed by 10.0.0.1 port 47726
May 14 17:57:11.713899 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
May 14 17:57:11.722064 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:47726.service: Deactivated successfully.
May 14 17:57:11.723496 systemd[1]: session-4.scope: Deactivated successfully.
May 14 17:57:11.724740 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit.
May 14 17:57:11.726467 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:47728.service - OpenSSH per-connection server daemon (10.0.0.1:47728).
May 14 17:57:11.727639 systemd-logind[1488]: Removed session 4.
May 14 17:57:11.775863 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 47728 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:57:11.776984 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:57:11.780673 systemd-logind[1488]: New session 5 of user core.
May 14 17:57:11.792396 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 17:57:11.850972 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 17:57:11.851264 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:57:11.862812 sudo[1667]: pam_unix(sudo:session): session closed for user root
May 14 17:57:11.864989 sshd[1666]: Connection closed by 10.0.0.1 port 47728
May 14 17:57:11.864756 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
May 14 17:57:11.872879 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:47728.service: Deactivated successfully.
May 14 17:57:11.875446 systemd[1]: session-5.scope: Deactivated successfully.
May 14 17:57:11.876189 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit.
May 14 17:57:11.878578 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:47738.service - OpenSSH per-connection server daemon (10.0.0.1:47738).
May 14 17:57:11.879075 systemd-logind[1488]: Removed session 5.
May 14 17:57:11.931027 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 47738 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:57:11.932421 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:57:11.936924 systemd-logind[1488]: New session 6 of user core.
May 14 17:57:11.944316 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 17:57:11.996249 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 17:57:11.996830 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:57:12.060555 sudo[1677]: pam_unix(sudo:session): session closed for user root
May 14 17:57:12.065648 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 17:57:12.065924 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:57:12.074408 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 17:57:12.107887 augenrules[1699]: No rules
May 14 17:57:12.109048 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 17:57:12.110246 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 17:57:12.111114 sudo[1676]: pam_unix(sudo:session): session closed for user root
May 14 17:57:12.112315 sshd[1675]: Connection closed by 10.0.0.1 port 47738
May 14 17:57:12.113259 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
May 14 17:57:12.121021 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:47738.service: Deactivated successfully.
May 14 17:57:12.122519 systemd[1]: session-6.scope: Deactivated successfully.
May 14 17:57:12.123153 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit.
May 14 17:57:12.125515 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:47752.service - OpenSSH per-connection server daemon (10.0.0.1:47752).
May 14 17:57:12.126548 systemd-logind[1488]: Removed session 6.
May 14 17:57:12.172948 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 47752 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:57:12.174188 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:57:12.178808 systemd-logind[1488]: New session 7 of user core.
May 14 17:57:12.189408 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 17:57:12.241238 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 17:57:12.241498 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 17:57:12.603943 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 17:57:12.627459 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 17:57:12.897594 dockerd[1732]: time="2025-05-14T17:57:12.897461351Z" level=info msg="Starting up"
May 14 17:57:12.899216 dockerd[1732]: time="2025-05-14T17:57:12.899178371Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 17:57:12.943374 dockerd[1732]: time="2025-05-14T17:57:12.943332226Z" level=info msg="Loading containers: start."
May 14 17:57:12.954188 kernel: Initializing XFRM netlink socket
May 14 17:57:13.154951 systemd-networkd[1439]: docker0: Link UP
May 14 17:57:13.159899 dockerd[1732]: time="2025-05-14T17:57:13.159853389Z" level=info msg="Loading containers: done."
May 14 17:57:13.171346 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1899869581-merged.mount: Deactivated successfully.
May 14 17:57:13.175347 dockerd[1732]: time="2025-05-14T17:57:13.175305457Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 17:57:13.175441 dockerd[1732]: time="2025-05-14T17:57:13.175384984Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 14 17:57:13.175514 dockerd[1732]: time="2025-05-14T17:57:13.175487618Z" level=info msg="Initializing buildkit"
May 14 17:57:13.202097 dockerd[1732]: time="2025-05-14T17:57:13.202052105Z" level=info msg="Completed buildkit initialization"
May 14 17:57:13.208881 dockerd[1732]: time="2025-05-14T17:57:13.208838256Z" level=info msg="Daemon has completed initialization"
May 14 17:57:13.208969 dockerd[1732]: time="2025-05-14T17:57:13.208918105Z" level=info msg="API listen on /run/docker.sock"
May 14 17:57:13.209161 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 17:57:14.022793 containerd[1502]: time="2025-05-14T17:57:14.022727488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 14 17:57:14.747616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743749783.mount: Deactivated successfully.
May 14 17:57:15.785081 containerd[1502]: time="2025-05-14T17:57:15.785026176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:15.785500 containerd[1502]: time="2025-05-14T17:57:15.785467860Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 14 17:57:15.786355 containerd[1502]: time="2025-05-14T17:57:15.786320841Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:15.789199 containerd[1502]: time="2025-05-14T17:57:15.788453191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:15.790263 containerd[1502]: time="2025-05-14T17:57:15.790231890Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.767461032s"
May 14 17:57:15.790333 containerd[1502]: time="2025-05-14T17:57:15.790270667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 14 17:57:15.805172 containerd[1502]: time="2025-05-14T17:57:15.805035308Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 14 17:57:17.247249 containerd[1502]: time="2025-05-14T17:57:17.247184798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:17.248573 containerd[1502]: time="2025-05-14T17:57:17.248353329Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 14 17:57:17.249525 containerd[1502]: time="2025-05-14T17:57:17.249497834Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:17.252676 containerd[1502]: time="2025-05-14T17:57:17.252641203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:17.254041 containerd[1502]: time="2025-05-14T17:57:17.254006668Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.448850138s"
May 14 17:57:17.254175 containerd[1502]: time="2025-05-14T17:57:17.254123545Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 14 17:57:17.269871 containerd[1502]: time="2025-05-14T17:57:17.269832769Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 14 17:57:17.416269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 17:57:17.417633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:57:17.539897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:57:17.543533 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 17:57:17.600232 kubelet[2035]: E0514 17:57:17.600152 2035 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 17:57:17.603571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 17:57:17.603727 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 17:57:17.604021 systemd[1]: kubelet.service: Consumed 141ms CPU time, 95.3M memory peak.
May 14 17:57:18.329344 containerd[1502]: time="2025-05-14T17:57:18.329297970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:18.330206 containerd[1502]: time="2025-05-14T17:57:18.330185037Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 14 17:57:18.330846 containerd[1502]: time="2025-05-14T17:57:18.330825037Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:18.333351 containerd[1502]: time="2025-05-14T17:57:18.333318158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:18.334258 containerd[1502]: time="2025-05-14T17:57:18.334231727Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.064363907s"
May 14 17:57:18.334375 containerd[1502]: time="2025-05-14T17:57:18.334336456Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 14 17:57:18.349124 containerd[1502]: time="2025-05-14T17:57:18.349099437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 14 17:57:19.439230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056813726.mount: Deactivated successfully.
May 14 17:57:19.793292 containerd[1502]: time="2025-05-14T17:57:19.793172644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:19.794566 containerd[1502]: time="2025-05-14T17:57:19.794527340Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 14 17:57:19.795120 containerd[1502]: time="2025-05-14T17:57:19.795096042Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:19.797043 containerd[1502]: time="2025-05-14T17:57:19.797004930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:19.797586 containerd[1502]: time="2025-05-14T17:57:19.797444123Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.448315419s"
May 14 17:57:19.797586 containerd[1502]: time="2025-05-14T17:57:19.797475068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 14 17:57:19.813057 containerd[1502]: time="2025-05-14T17:57:19.813015411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 17:57:20.454402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319121059.mount: Deactivated successfully.
May 14 17:57:21.160084 containerd[1502]: time="2025-05-14T17:57:21.159055470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:21.160084 containerd[1502]: time="2025-05-14T17:57:21.159532389Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 14 17:57:21.160703 containerd[1502]: time="2025-05-14T17:57:21.160673606Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:21.162954 containerd[1502]: time="2025-05-14T17:57:21.162918100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:21.164078 containerd[1502]: time="2025-05-14T17:57:21.164035119Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.350986843s"
May 14 17:57:21.164205 containerd[1502]: time="2025-05-14T17:57:21.164164284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 14 17:57:21.179483 containerd[1502]: time="2025-05-14T17:57:21.179441330Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 14 17:57:21.677752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133026385.mount: Deactivated successfully.
May 14 17:57:21.681261 containerd[1502]: time="2025-05-14T17:57:21.681220341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:21.681720 containerd[1502]: time="2025-05-14T17:57:21.681690169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 14 17:57:21.682437 containerd[1502]: time="2025-05-14T17:57:21.682412079Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:21.684847 containerd[1502]: time="2025-05-14T17:57:21.684797197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:21.685506 containerd[1502]: time="2025-05-14T17:57:21.685474275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 505.819726ms"
May 14 17:57:21.685506 containerd[1502]: time="2025-05-14T17:57:21.685505084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 14 17:57:21.700759 containerd[1502]: time="2025-05-14T17:57:21.700726841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 14 17:57:22.324833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58476388.mount: Deactivated successfully.
May 14 17:57:23.860420 containerd[1502]: time="2025-05-14T17:57:23.860368978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:23.861173 containerd[1502]: time="2025-05-14T17:57:23.860877278Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 14 17:57:23.862179 containerd[1502]: time="2025-05-14T17:57:23.862136694Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:23.865599 containerd[1502]: time="2025-05-14T17:57:23.865554182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 17:57:23.867472 containerd[1502]: time="2025-05-14T17:57:23.867442124Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.166678187s"
May 14 17:57:23.867536 containerd[1502]: time="2025-05-14T17:57:23.867473843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 14 17:57:27.721203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 17:57:27.723109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:57:27.732719 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 14 17:57:27.732794 systemd[1]: kubelet.service: Failed with result 'signal'.
May 14 17:57:27.734243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:57:27.736670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:57:27.754260 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-7.scope)...
May 14 17:57:27.754276 systemd[1]: Reloading...
May 14 17:57:27.829181 zram_generator::config[2327]: No configuration found.
May 14 17:57:27.927063 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 17:57:28.013377 systemd[1]: Reloading finished in 258 ms.
May 14 17:57:28.073678 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 14 17:57:28.073770 systemd[1]: kubelet.service: Failed with result 'signal'.
May 14 17:57:28.075209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:57:28.075265 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.4M memory peak.
May 14 17:57:28.076936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 17:57:28.179191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 17:57:28.183367 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 17:57:28.221025 kubelet[2372]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 17:57:28.221025 kubelet[2372]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 17:57:28.221025 kubelet[2372]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 17:57:28.222685 kubelet[2372]: I0514 17:57:28.222633 2372 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 17:57:29.352503 kubelet[2372]: I0514 17:57:29.352461 2372 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 14 17:57:29.352503 kubelet[2372]: I0514 17:57:29.352490 2372 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 17:57:29.352841 kubelet[2372]: I0514 17:57:29.352689 2372 server.go:927] "Client rotation is on, will bootstrap in background"
May 14 17:57:29.414926 kubelet[2372]: E0514 17:57:29.414742 2372 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.30:6443: connect: connection refused
May 14 17:57:29.414926 kubelet[2372]: I0514 17:57:29.414798 2372 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 17:57:29.423643 kubelet[2372]: I0514 17:57:29.423614 2372 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 17:57:29.425886 kubelet[2372]: I0514 17:57:29.425410 2372 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 17:57:29.425886 kubelet[2372]: I0514 17:57:29.425461 2372 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 14 17:57:29.425886 kubelet[2372]: I0514 17:57:29.425819 2372 topology_manager.go:138] "Creating topology manager with none policy"
May 14 17:57:29.425886 kubelet[2372]: I0514 17:57:29.425829 2372 container_manager_linux.go:301] "Creating device plugin manager"
May 14 17:57:29.426138 kubelet[2372]: I0514 17:57:29.426127 2372 state_mem.go:36] "Initialized new in-memory state store"
May 14 17:57:29.427613 kubelet[2372]: I0514 17:57:29.427580 2372 kubelet.go:400] "Attempting to sync node with API server"
May 14 17:57:29.427613 kubelet[2372]: I0514 17:57:29.427608 2372 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 17:57:29.427840 kubelet[2372]: I0514 17:57:29.427817 2372 kubelet.go:312] "Adding apiserver pod source"
May 14 17:57:29.427840 kubelet[2372]: W0514 17:57:29.427802 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
May 14 17:57:29.427890 kubelet[2372]: E0514 17:57:29.427853 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
May 14 17:57:29.427911 kubelet[2372]: I0514 17:57:29.427893 2372 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 17:57:29.428480 kubelet[2372]: W0514 17:57:29.428442 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
May 14 17:57:29.428526 kubelet[2372]: E0514 17:57:29.428487 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
May 14 17:57:29.430847 kubelet[2372]: I0514 17:57:29.430793 2372 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 14 17:57:29.431286 kubelet[2372]: I0514 17:57:29.431270 2372 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 17:57:29.431444 kubelet[2372]: W0514 17:57:29.431434 2372 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 17:57:29.432409 kubelet[2372]: I0514 17:57:29.432393 2372 server.go:1264] "Started kubelet"
May 14 17:57:29.435583 kubelet[2372]: I0514 17:57:29.433015 2372 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 17:57:29.435583 kubelet[2372]: I0514 17:57:29.434265 2372 server.go:455] "Adding debug handlers to kubelet server"
May 14 17:57:29.435583 kubelet[2372]: I0514 17:57:29.434391 2372 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 17:57:29.435583 kubelet[2372]: I0514 17:57:29.434703 2372 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 17:57:29.437936 kubelet[2372]: I0514 17:57:29.436100 2372 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 17:57:29.439181 kubelet[2372]: E0514 17:57:29.438260 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 17:57:29.439181 kubelet[2372]: E0514 17:57:29.435833 2372 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect:
connection refused" event="&Event{ObjectMeta:{localhost.183f767a9cd585ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 17:57:29.432368618 +0000 UTC m=+1.246004261,LastTimestamp:2025-05-14 17:57:29.432368618 +0000 UTC m=+1.246004261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 17:57:29.439181 kubelet[2372]: I0514 17:57:29.438366 2372 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 17:57:29.439181 kubelet[2372]: I0514 17:57:29.438439 2372 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 17:57:29.439345 kubelet[2372]: I0514 17:57:29.439208 2372 reconciler.go:26] "Reconciler: start to sync state" May 14 17:57:29.439544 kubelet[2372]: W0514 17:57:29.439491 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:29.439544 kubelet[2372]: E0514 17:57:29.439545 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:29.440680 kubelet[2372]: E0514 17:57:29.440630 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms" May 14 
17:57:29.442520 kubelet[2372]: I0514 17:57:29.442486 2372 factory.go:221] Registration of the containerd container factory successfully May 14 17:57:29.442520 kubelet[2372]: I0514 17:57:29.442504 2372 factory.go:221] Registration of the systemd container factory successfully May 14 17:57:29.442626 kubelet[2372]: I0514 17:57:29.442584 2372 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 17:57:29.451141 kubelet[2372]: I0514 17:57:29.451091 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 17:57:29.452320 kubelet[2372]: I0514 17:57:29.452300 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 17:57:29.452430 kubelet[2372]: I0514 17:57:29.452411 2372 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 17:57:29.452499 kubelet[2372]: I0514 17:57:29.452490 2372 kubelet.go:2337] "Starting kubelet main sync loop" May 14 17:57:29.452642 kubelet[2372]: E0514 17:57:29.452623 2372 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 17:57:29.453634 kubelet[2372]: W0514 17:57:29.453513 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:29.453802 kubelet[2372]: E0514 17:57:29.453787 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:29.454595 kubelet[2372]: I0514 17:57:29.454567 2372 
cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 17:57:29.454944 kubelet[2372]: I0514 17:57:29.454670 2372 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 17:57:29.454944 kubelet[2372]: I0514 17:57:29.454688 2372 state_mem.go:36] "Initialized new in-memory state store" May 14 17:57:29.457210 kubelet[2372]: I0514 17:57:29.457193 2372 policy_none.go:49] "None policy: Start" May 14 17:57:29.458686 kubelet[2372]: I0514 17:57:29.458667 2372 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 17:57:29.458754 kubelet[2372]: I0514 17:57:29.458696 2372 state_mem.go:35] "Initializing new in-memory state store" May 14 17:57:29.466300 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 17:57:29.479257 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 17:57:29.482562 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 17:57:29.502138 kubelet[2372]: I0514 17:57:29.502005 2372 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 17:57:29.502535 kubelet[2372]: I0514 17:57:29.502354 2372 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 17:57:29.502535 kubelet[2372]: I0514 17:57:29.502482 2372 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 17:57:29.504550 kubelet[2372]: E0514 17:57:29.504519 2372 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 17:57:29.540266 kubelet[2372]: I0514 17:57:29.540233 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 17:57:29.540614 kubelet[2372]: E0514 17:57:29.540574 2372 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" May 14 17:57:29.553887 kubelet[2372]: I0514 17:57:29.553810 2372 topology_manager.go:215] "Topology Admit Handler" podUID="0f4955890d507ed9e29f25d3dcb18e17" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 17:57:29.555110 kubelet[2372]: I0514 17:57:29.555078 2372 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 17:57:29.556012 kubelet[2372]: I0514 17:57:29.555973 2372 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 17:57:29.561572 systemd[1]: Created slice kubepods-burstable-pod0f4955890d507ed9e29f25d3dcb18e17.slice - libcontainer container kubepods-burstable-pod0f4955890d507ed9e29f25d3dcb18e17.slice. 
May 14 17:57:29.589706 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 14 17:57:29.611735 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 14 17:57:29.641239 kubelet[2372]: E0514 17:57:29.641189 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms" May 14 17:57:29.740961 kubelet[2372]: I0514 17:57:29.740884 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:29.740961 kubelet[2372]: I0514 17:57:29.740922 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:29.741616 kubelet[2372]: I0514 17:57:29.740944 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 17:57:29.741701 kubelet[2372]: I0514 17:57:29.741623 2372 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f4955890d507ed9e29f25d3dcb18e17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f4955890d507ed9e29f25d3dcb18e17\") " pod="kube-system/kube-apiserver-localhost" May 14 17:57:29.741701 kubelet[2372]: I0514 17:57:29.741647 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f4955890d507ed9e29f25d3dcb18e17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f4955890d507ed9e29f25d3dcb18e17\") " pod="kube-system/kube-apiserver-localhost" May 14 17:57:29.741701 kubelet[2372]: I0514 17:57:29.741666 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:29.741701 kubelet[2372]: I0514 17:57:29.741682 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f4955890d507ed9e29f25d3dcb18e17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f4955890d507ed9e29f25d3dcb18e17\") " pod="kube-system/kube-apiserver-localhost" May 14 17:57:29.741701 kubelet[2372]: I0514 17:57:29.741698 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:29.741817 kubelet[2372]: I0514 17:57:29.741715 2372 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:29.742344 kubelet[2372]: I0514 17:57:29.742325 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 17:57:29.742704 kubelet[2372]: E0514 17:57:29.742681 2372 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" May 14 17:57:29.887980 containerd[1502]: time="2025-05-14T17:57:29.887867932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f4955890d507ed9e29f25d3dcb18e17,Namespace:kube-system,Attempt:0,}" May 14 17:57:29.910527 containerd[1502]: time="2025-05-14T17:57:29.910480077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 17:57:29.916123 containerd[1502]: time="2025-05-14T17:57:29.916092071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 17:57:30.042562 kubelet[2372]: E0514 17:57:30.042509 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms" May 14 17:57:30.144011 kubelet[2372]: I0514 17:57:30.143908 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 17:57:30.144318 kubelet[2372]: E0514 17:57:30.144246 2372 kubelet_node_status.go:96] 
"Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" May 14 17:57:30.337218 kubelet[2372]: W0514 17:57:30.337123 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.337218 kubelet[2372]: E0514 17:57:30.337212 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.418208 kubelet[2372]: W0514 17:57:30.418072 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.418208 kubelet[2372]: E0514 17:57:30.418141 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.633527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051996769.mount: Deactivated successfully. 
May 14 17:57:30.637839 containerd[1502]: time="2025-05-14T17:57:30.637789070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 17:57:30.639016 containerd[1502]: time="2025-05-14T17:57:30.638982602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 17:57:30.641284 containerd[1502]: time="2025-05-14T17:57:30.641247047Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 17:57:30.643324 containerd[1502]: time="2025-05-14T17:57:30.643239002Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 17:57:30.644875 containerd[1502]: time="2025-05-14T17:57:30.644824322Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 17:57:30.645408 containerd[1502]: time="2025-05-14T17:57:30.645354456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 17:57:30.645927 containerd[1502]: time="2025-05-14T17:57:30.645896476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 17:57:30.646324 containerd[1502]: time="2025-05-14T17:57:30.646288423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 
17:57:30.648048 containerd[1502]: time="2025-05-14T17:57:30.648015011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 735.899959ms" May 14 17:57:30.648542 containerd[1502]: time="2025-05-14T17:57:30.648514090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 756.013261ms" May 14 17:57:30.652845 containerd[1502]: time="2025-05-14T17:57:30.652802105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 735.14886ms" May 14 17:57:30.675231 containerd[1502]: time="2025-05-14T17:57:30.674202201Z" level=info msg="connecting to shim 5ebee57d340fdcf74fae201f15c1ffa5964017e3285e737a2ba2190108294718" address="unix:///run/containerd/s/3b32ec926697e6523053b9d7f82e7f9094ad3ccb23ae0a0f0e4466afaafa49ec" namespace=k8s.io protocol=ttrpc version=3 May 14 17:57:30.678019 containerd[1502]: time="2025-05-14T17:57:30.677977651Z" level=info msg="connecting to shim 8feabd017fca8b43497f7910bda0898e28a4d0f24fb40b202527644c57da3aea" address="unix:///run/containerd/s/d81d04d1934d8dc2df930d19d47d3f398f1838fcff0ee8b381a056f8be0c7e7b" namespace=k8s.io protocol=ttrpc version=3 May 14 17:57:30.680386 containerd[1502]: time="2025-05-14T17:57:30.680343145Z" level=info msg="connecting to shim 
e61420473a692a10c3fccf71c90e31aaf8a3ca06ea4212f6a26619aa1ec243be" address="unix:///run/containerd/s/a6fa74515c449c704a0b3c70be8faf1a1e110f312b62015e4e28c5eb7fff2019" namespace=k8s.io protocol=ttrpc version=3 May 14 17:57:30.697320 systemd[1]: Started cri-containerd-5ebee57d340fdcf74fae201f15c1ffa5964017e3285e737a2ba2190108294718.scope - libcontainer container 5ebee57d340fdcf74fae201f15c1ffa5964017e3285e737a2ba2190108294718. May 14 17:57:30.700520 systemd[1]: Started cri-containerd-8feabd017fca8b43497f7910bda0898e28a4d0f24fb40b202527644c57da3aea.scope - libcontainer container 8feabd017fca8b43497f7910bda0898e28a4d0f24fb40b202527644c57da3aea. May 14 17:57:30.703683 systemd[1]: Started cri-containerd-e61420473a692a10c3fccf71c90e31aaf8a3ca06ea4212f6a26619aa1ec243be.scope - libcontainer container e61420473a692a10c3fccf71c90e31aaf8a3ca06ea4212f6a26619aa1ec243be. May 14 17:57:30.738436 containerd[1502]: time="2025-05-14T17:57:30.738382921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ebee57d340fdcf74fae201f15c1ffa5964017e3285e737a2ba2190108294718\"" May 14 17:57:30.743023 containerd[1502]: time="2025-05-14T17:57:30.742984566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"8feabd017fca8b43497f7910bda0898e28a4d0f24fb40b202527644c57da3aea\"" May 14 17:57:30.747242 containerd[1502]: time="2025-05-14T17:57:30.747194184Z" level=info msg="CreateContainer within sandbox \"5ebee57d340fdcf74fae201f15c1ffa5964017e3285e737a2ba2190108294718\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 17:57:30.747951 containerd[1502]: time="2025-05-14T17:57:30.747926455Z" level=info msg="CreateContainer within sandbox \"8feabd017fca8b43497f7910bda0898e28a4d0f24fb40b202527644c57da3aea\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 17:57:30.754536 containerd[1502]: time="2025-05-14T17:57:30.754239120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f4955890d507ed9e29f25d3dcb18e17,Namespace:kube-system,Attempt:0,} returns sandbox id \"e61420473a692a10c3fccf71c90e31aaf8a3ca06ea4212f6a26619aa1ec243be\"" May 14 17:57:30.757742 containerd[1502]: time="2025-05-14T17:57:30.757705382Z" level=info msg="Container 1a7ac18925014e4dd36c78195d5ae8f90c3d91a446807c43bef3f5c57d59f25c: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:30.758141 containerd[1502]: time="2025-05-14T17:57:30.758111976Z" level=info msg="CreateContainer within sandbox \"e61420473a692a10c3fccf71c90e31aaf8a3ca06ea4212f6a26619aa1ec243be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 17:57:30.765659 containerd[1502]: time="2025-05-14T17:57:30.765606528Z" level=info msg="Container 3caa9d6754a6f356718d10d8aa3bafe02fba221b517a88635a072782a6a46311: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:30.767044 containerd[1502]: time="2025-05-14T17:57:30.767017525Z" level=info msg="Container bfca28190bedfa0aaa6774b98533dde17b70a35a7a9e65b225a0ddccdb9c1099: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:30.767959 containerd[1502]: time="2025-05-14T17:57:30.767931483Z" level=info msg="CreateContainer within sandbox \"5ebee57d340fdcf74fae201f15c1ffa5964017e3285e737a2ba2190108294718\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a7ac18925014e4dd36c78195d5ae8f90c3d91a446807c43bef3f5c57d59f25c\"" May 14 17:57:30.768615 containerd[1502]: time="2025-05-14T17:57:30.768593880Z" level=info msg="StartContainer for \"1a7ac18925014e4dd36c78195d5ae8f90c3d91a446807c43bef3f5c57d59f25c\"" May 14 17:57:30.769919 containerd[1502]: time="2025-05-14T17:57:30.769894703Z" level=info msg="connecting to shim 
1a7ac18925014e4dd36c78195d5ae8f90c3d91a446807c43bef3f5c57d59f25c" address="unix:///run/containerd/s/3b32ec926697e6523053b9d7f82e7f9094ad3ccb23ae0a0f0e4466afaafa49ec" protocol=ttrpc version=3 May 14 17:57:30.772543 containerd[1502]: time="2025-05-14T17:57:30.772499112Z" level=info msg="CreateContainer within sandbox \"8feabd017fca8b43497f7910bda0898e28a4d0f24fb40b202527644c57da3aea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3caa9d6754a6f356718d10d8aa3bafe02fba221b517a88635a072782a6a46311\"" May 14 17:57:30.773105 containerd[1502]: time="2025-05-14T17:57:30.773074868Z" level=info msg="StartContainer for \"3caa9d6754a6f356718d10d8aa3bafe02fba221b517a88635a072782a6a46311\"" May 14 17:57:30.774218 containerd[1502]: time="2025-05-14T17:57:30.774191003Z" level=info msg="connecting to shim 3caa9d6754a6f356718d10d8aa3bafe02fba221b517a88635a072782a6a46311" address="unix:///run/containerd/s/d81d04d1934d8dc2df930d19d47d3f398f1838fcff0ee8b381a056f8be0c7e7b" protocol=ttrpc version=3 May 14 17:57:30.778602 containerd[1502]: time="2025-05-14T17:57:30.778545009Z" level=info msg="CreateContainer within sandbox \"e61420473a692a10c3fccf71c90e31aaf8a3ca06ea4212f6a26619aa1ec243be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bfca28190bedfa0aaa6774b98533dde17b70a35a7a9e65b225a0ddccdb9c1099\"" May 14 17:57:30.779074 containerd[1502]: time="2025-05-14T17:57:30.778996346Z" level=info msg="StartContainer for \"bfca28190bedfa0aaa6774b98533dde17b70a35a7a9e65b225a0ddccdb9c1099\"" May 14 17:57:30.780116 containerd[1502]: time="2025-05-14T17:57:30.780084467Z" level=info msg="connecting to shim bfca28190bedfa0aaa6774b98533dde17b70a35a7a9e65b225a0ddccdb9c1099" address="unix:///run/containerd/s/a6fa74515c449c704a0b3c70be8faf1a1e110f312b62015e4e28c5eb7fff2019" protocol=ttrpc version=3 May 14 17:57:30.789350 systemd[1]: Started cri-containerd-1a7ac18925014e4dd36c78195d5ae8f90c3d91a446807c43bef3f5c57d59f25c.scope - libcontainer 
container 1a7ac18925014e4dd36c78195d5ae8f90c3d91a446807c43bef3f5c57d59f25c. May 14 17:57:30.792481 systemd[1]: Started cri-containerd-3caa9d6754a6f356718d10d8aa3bafe02fba221b517a88635a072782a6a46311.scope - libcontainer container 3caa9d6754a6f356718d10d8aa3bafe02fba221b517a88635a072782a6a46311. May 14 17:57:30.812324 systemd[1]: Started cri-containerd-bfca28190bedfa0aaa6774b98533dde17b70a35a7a9e65b225a0ddccdb9c1099.scope - libcontainer container bfca28190bedfa0aaa6774b98533dde17b70a35a7a9e65b225a0ddccdb9c1099. May 14 17:57:30.820653 kubelet[2372]: W0514 17:57:30.820591 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.820653 kubelet[2372]: E0514 17:57:30.820655 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.843924 kubelet[2372]: E0514 17:57:30.843874 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="1.6s" May 14 17:57:30.849410 containerd[1502]: time="2025-05-14T17:57:30.849360108Z" level=info msg="StartContainer for \"1a7ac18925014e4dd36c78195d5ae8f90c3d91a446807c43bef3f5c57d59f25c\" returns successfully" May 14 17:57:30.853585 containerd[1502]: time="2025-05-14T17:57:30.851261620Z" level=info msg="StartContainer for \"3caa9d6754a6f356718d10d8aa3bafe02fba221b517a88635a072782a6a46311\" returns successfully" May 14 17:57:30.872506 containerd[1502]: time="2025-05-14T17:57:30.871315631Z" level=info msg="StartContainer for 
\"bfca28190bedfa0aaa6774b98533dde17b70a35a7a9e65b225a0ddccdb9c1099\" returns successfully" May 14 17:57:30.907242 kubelet[2372]: W0514 17:57:30.907084 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.907242 kubelet[2372]: E0514 17:57:30.907183 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused May 14 17:57:30.946516 kubelet[2372]: I0514 17:57:30.946073 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 17:57:30.946516 kubelet[2372]: E0514 17:57:30.946415 2372 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost" May 14 17:57:32.547859 kubelet[2372]: I0514 17:57:32.547824 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 17:57:33.044767 kubelet[2372]: E0514 17:57:33.044419 2372 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 17:57:33.132277 kubelet[2372]: I0514 17:57:33.132224 2372 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 17:57:33.430549 kubelet[2372]: I0514 17:57:33.430442 2372 apiserver.go:52] "Watching apiserver" May 14 17:57:33.438880 kubelet[2372]: I0514 17:57:33.438823 2372 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 17:57:35.154928 systemd[1]: Reload requested from client PID 2648 ('systemctl') (unit session-7.scope)... 
May 14 17:57:35.154945 systemd[1]: Reloading... May 14 17:57:35.220188 zram_generator::config[2691]: No configuration found. May 14 17:57:35.299435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 17:57:35.397349 systemd[1]: Reloading finished in 242 ms. May 14 17:57:35.421184 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:57:35.436311 systemd[1]: kubelet.service: Deactivated successfully. May 14 17:57:35.436595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:57:35.436657 systemd[1]: kubelet.service: Consumed 1.625s CPU time, 115.1M memory peak. May 14 17:57:35.438558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:57:35.551893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:57:35.555574 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 17:57:35.612464 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 17:57:35.612464 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 17:57:35.612464 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 17:57:35.613348 kubelet[2733]: I0514 17:57:35.613297 2733 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 17:57:35.618010 kubelet[2733]: I0514 17:57:35.617963 2733 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 17:57:35.618010 kubelet[2733]: I0514 17:57:35.617992 2733 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 17:57:35.618190 kubelet[2733]: I0514 17:57:35.618176 2733 server.go:927] "Client rotation is on, will bootstrap in background" May 14 17:57:35.619549 kubelet[2733]: I0514 17:57:35.619524 2733 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 17:57:35.621435 kubelet[2733]: I0514 17:57:35.621315 2733 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 17:57:35.627212 kubelet[2733]: I0514 17:57:35.627151 2733 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 17:57:35.627509 kubelet[2733]: I0514 17:57:35.627471 2733 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 17:57:35.627776 kubelet[2733]: I0514 17:57:35.627508 2733 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 17:57:35.627853 kubelet[2733]: I0514 17:57:35.627780 2733 topology_manager.go:138] "Creating topology manager with none policy" May 14 
17:57:35.627853 kubelet[2733]: I0514 17:57:35.627790 2733 container_manager_linux.go:301] "Creating device plugin manager" May 14 17:57:35.627853 kubelet[2733]: I0514 17:57:35.627827 2733 state_mem.go:36] "Initialized new in-memory state store" May 14 17:57:35.627981 kubelet[2733]: I0514 17:57:35.627947 2733 kubelet.go:400] "Attempting to sync node with API server" May 14 17:57:35.627981 kubelet[2733]: I0514 17:57:35.627966 2733 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 17:57:35.628101 kubelet[2733]: I0514 17:57:35.628081 2733 kubelet.go:312] "Adding apiserver pod source" May 14 17:57:35.628176 kubelet[2733]: I0514 17:57:35.628166 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 17:57:35.631180 kubelet[2733]: I0514 17:57:35.629441 2733 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 17:57:35.631180 kubelet[2733]: I0514 17:57:35.629622 2733 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 17:57:35.631180 kubelet[2733]: I0514 17:57:35.629984 2733 server.go:1264] "Started kubelet" May 14 17:57:35.631180 kubelet[2733]: I0514 17:57:35.630701 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 17:57:35.631180 kubelet[2733]: I0514 17:57:35.631012 2733 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 17:57:35.631180 kubelet[2733]: I0514 17:57:35.631050 2733 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 17:57:35.632045 kubelet[2733]: I0514 17:57:35.632025 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 17:57:35.633254 kubelet[2733]: I0514 17:57:35.632100 2733 server.go:455] "Adding debug handlers to kubelet server" May 14 17:57:35.633488 kubelet[2733]: E0514 17:57:35.633470 2733 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 17:57:35.633523 kubelet[2733]: I0514 17:57:35.633507 2733 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 17:57:35.633611 kubelet[2733]: I0514 17:57:35.633594 2733 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 17:57:35.633762 kubelet[2733]: I0514 17:57:35.633742 2733 reconciler.go:26] "Reconciler: start to sync state" May 14 17:57:35.647420 kubelet[2733]: E0514 17:57:35.647381 2733 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 17:57:35.647635 kubelet[2733]: I0514 17:57:35.647612 2733 factory.go:221] Registration of the systemd container factory successfully May 14 17:57:35.647824 kubelet[2733]: I0514 17:57:35.647716 2733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 17:57:35.650453 kubelet[2733]: I0514 17:57:35.650424 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 17:57:35.653315 kubelet[2733]: I0514 17:57:35.652935 2733 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 17:57:35.653810 kubelet[2733]: I0514 17:57:35.653593 2733 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 17:57:35.653810 kubelet[2733]: I0514 17:57:35.653633 2733 kubelet.go:2337] "Starting kubelet main sync loop" May 14 17:57:35.653810 kubelet[2733]: E0514 17:57:35.653682 2733 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 17:57:35.660202 kubelet[2733]: I0514 17:57:35.660177 2733 factory.go:221] Registration of the containerd container factory successfully May 14 17:57:35.698474 kubelet[2733]: I0514 17:57:35.698385 2733 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 17:57:35.698589 kubelet[2733]: I0514 17:57:35.698575 2733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 17:57:35.698641 kubelet[2733]: I0514 17:57:35.698634 2733 state_mem.go:36] "Initialized new in-memory state store" May 14 17:57:35.698840 kubelet[2733]: I0514 17:57:35.698825 2733 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 17:57:35.698917 kubelet[2733]: I0514 17:57:35.698893 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 17:57:35.698975 kubelet[2733]: I0514 17:57:35.698967 2733 policy_none.go:49] "None policy: Start" May 14 17:57:35.700208 kubelet[2733]: I0514 17:57:35.700191 2733 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 17:57:35.700400 kubelet[2733]: I0514 17:57:35.700389 2733 state_mem.go:35] "Initializing new in-memory state store" May 14 17:57:35.700586 kubelet[2733]: I0514 17:57:35.700572 2733 state_mem.go:75] "Updated machine memory state" May 14 17:57:35.704753 kubelet[2733]: I0514 17:57:35.704734 2733 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 17:57:35.705011 kubelet[2733]: I0514 17:57:35.704974 2733 container_log_manager.go:186] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 17:57:35.705177 kubelet[2733]: I0514 17:57:35.705149 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 17:57:35.735821 kubelet[2733]: I0514 17:57:35.735796 2733 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 17:57:35.743322 kubelet[2733]: I0514 17:57:35.743289 2733 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 14 17:57:35.743422 kubelet[2733]: I0514 17:57:35.743377 2733 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 17:57:35.754475 kubelet[2733]: I0514 17:57:35.754363 2733 topology_manager.go:215] "Topology Admit Handler" podUID="0f4955890d507ed9e29f25d3dcb18e17" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 17:57:35.754475 kubelet[2733]: I0514 17:57:35.754464 2733 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 17:57:35.754596 kubelet[2733]: I0514 17:57:35.754499 2733 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 17:57:35.834876 kubelet[2733]: I0514 17:57:35.834841 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:35.835176 kubelet[2733]: I0514 17:57:35.835096 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f4955890d507ed9e29f25d3dcb18e17-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"0f4955890d507ed9e29f25d3dcb18e17\") " pod="kube-system/kube-apiserver-localhost" May 14 17:57:35.835176 kubelet[2733]: I0514 17:57:35.835123 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f4955890d507ed9e29f25d3dcb18e17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f4955890d507ed9e29f25d3dcb18e17\") " pod="kube-system/kube-apiserver-localhost" May 14 17:57:35.835176 kubelet[2733]: I0514 17:57:35.835144 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f4955890d507ed9e29f25d3dcb18e17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f4955890d507ed9e29f25d3dcb18e17\") " pod="kube-system/kube-apiserver-localhost" May 14 17:57:35.835369 kubelet[2733]: I0514 17:57:35.835293 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:35.835369 kubelet[2733]: I0514 17:57:35.835316 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:35.835369 kubelet[2733]: I0514 17:57:35.835331 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:35.835601 kubelet[2733]: I0514 17:57:35.835562 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 17:57:35.835713 kubelet[2733]: I0514 17:57:35.835687 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 17:57:36.158250 sudo[2768]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 17:57:36.158525 sudo[2768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 17:57:36.596882 sudo[2768]: pam_unix(sudo:session): session closed for user root May 14 17:57:36.629134 kubelet[2733]: I0514 17:57:36.629087 2733 apiserver.go:52] "Watching apiserver" May 14 17:57:36.635332 kubelet[2733]: I0514 17:57:36.635302 2733 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 17:57:36.705646 kubelet[2733]: I0514 17:57:36.705525 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.705506777 podStartE2EDuration="1.705506777s" podCreationTimestamp="2025-05-14 17:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:57:36.692625086 +0000 UTC m=+1.131207336" 
watchObservedRunningTime="2025-05-14 17:57:36.705506777 +0000 UTC m=+1.144089027" May 14 17:57:36.718005 kubelet[2733]: I0514 17:57:36.716151 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.716134424 podStartE2EDuration="1.716134424s" podCreationTimestamp="2025-05-14 17:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:57:36.705756031 +0000 UTC m=+1.144338281" watchObservedRunningTime="2025-05-14 17:57:36.716134424 +0000 UTC m=+1.154716634" May 14 17:57:36.727997 kubelet[2733]: I0514 17:57:36.727944 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.727926831 podStartE2EDuration="1.727926831s" podCreationTimestamp="2025-05-14 17:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:57:36.718233195 +0000 UTC m=+1.156815446" watchObservedRunningTime="2025-05-14 17:57:36.727926831 +0000 UTC m=+1.166509081" May 14 17:57:38.423089 sudo[1711]: pam_unix(sudo:session): session closed for user root May 14 17:57:38.424638 sshd[1710]: Connection closed by 10.0.0.1 port 47752 May 14 17:57:38.424987 sshd-session[1708]: pam_unix(sshd:session): session closed for user core May 14 17:57:38.429883 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:47752.service: Deactivated successfully. May 14 17:57:38.432347 systemd[1]: session-7.scope: Deactivated successfully. May 14 17:57:38.432597 systemd[1]: session-7.scope: Consumed 6.224s CPU time, 281.7M memory peak. May 14 17:57:38.435337 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit. May 14 17:57:38.437740 systemd-logind[1488]: Removed session 7. 
May 14 17:57:48.664265 kubelet[2733]: I0514 17:57:48.664197 2733 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 17:57:48.667143 containerd[1502]: time="2025-05-14T17:57:48.667098724Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 17:57:48.667470 kubelet[2733]: I0514 17:57:48.667384 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 17:57:49.574852 kubelet[2733]: I0514 17:57:49.574739 2733 topology_manager.go:215] "Topology Admit Handler" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" podNamespace="kube-system" podName="cilium-krrjs" May 14 17:57:49.575039 kubelet[2733]: I0514 17:57:49.575024 2733 topology_manager.go:215] "Topology Admit Handler" podUID="0fe446fb-b1f7-4d66-b6c2-74893ef2a357" podNamespace="kube-system" podName="kube-proxy-jfv4k" May 14 17:57:49.589452 systemd[1]: Created slice kubepods-besteffort-pod0fe446fb_b1f7_4d66_b6c2_74893ef2a357.slice - libcontainer container kubepods-besteffort-pod0fe446fb_b1f7_4d66_b6c2_74893ef2a357.slice. May 14 17:57:49.609429 systemd[1]: Created slice kubepods-burstable-pod2c1fde50_1c25_4687_8a12_5fe284b6ae21.slice - libcontainer container kubepods-burstable-pod2c1fde50_1c25_4687_8a12_5fe284b6ae21.slice. 
May 14 17:57:49.624538 kubelet[2733]: I0514 17:57:49.624479 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe446fb-b1f7-4d66-b6c2-74893ef2a357-lib-modules\") pod \"kube-proxy-jfv4k\" (UID: \"0fe446fb-b1f7-4d66-b6c2-74893ef2a357\") " pod="kube-system/kube-proxy-jfv4k" May 14 17:57:49.624538 kubelet[2733]: I0514 17:57:49.624537 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-net\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624714 kubelet[2733]: I0514 17:57:49.624556 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-xtables-lock\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624714 kubelet[2733]: I0514 17:57:49.624571 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hubble-tls\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624714 kubelet[2733]: I0514 17:57:49.624598 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-run\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624714 kubelet[2733]: I0514 17:57:49.624614 2733 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-etc-cni-netd\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624714 kubelet[2733]: I0514 17:57:49.624628 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-config-path\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624714 kubelet[2733]: I0514 17:57:49.624642 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe446fb-b1f7-4d66-b6c2-74893ef2a357-xtables-lock\") pod \"kube-proxy-jfv4k\" (UID: \"0fe446fb-b1f7-4d66-b6c2-74893ef2a357\") " pod="kube-system/kube-proxy-jfv4k" May 14 17:57:49.624838 kubelet[2733]: I0514 17:57:49.624673 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hnqp\" (UniqueName: \"kubernetes.io/projected/0fe446fb-b1f7-4d66-b6c2-74893ef2a357-kube-api-access-8hnqp\") pod \"kube-proxy-jfv4k\" (UID: \"0fe446fb-b1f7-4d66-b6c2-74893ef2a357\") " pod="kube-system/kube-proxy-jfv4k" May 14 17:57:49.624838 kubelet[2733]: I0514 17:57:49.624690 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-cgroup\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624838 kubelet[2733]: I0514 17:57:49.624708 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-kernel\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624838 kubelet[2733]: I0514 17:57:49.624722 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mbn\" (UniqueName: \"kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-kube-api-access-d5mbn\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624838 kubelet[2733]: I0514 17:57:49.624746 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fe446fb-b1f7-4d66-b6c2-74893ef2a357-kube-proxy\") pod \"kube-proxy-jfv4k\" (UID: \"0fe446fb-b1f7-4d66-b6c2-74893ef2a357\") " pod="kube-system/kube-proxy-jfv4k" May 14 17:57:49.624939 kubelet[2733]: I0514 17:57:49.624762 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hostproc\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624939 kubelet[2733]: I0514 17:57:49.624777 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c1fde50-1c25-4687-8a12-5fe284b6ae21-clustermesh-secrets\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624939 kubelet[2733]: I0514 17:57:49.624793 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-bpf-maps\") pod \"cilium-krrjs\" (UID: 
\"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624939 kubelet[2733]: I0514 17:57:49.624807 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cni-path\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.624939 kubelet[2733]: I0514 17:57:49.624832 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-lib-modules\") pod \"cilium-krrjs\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " pod="kube-system/cilium-krrjs" May 14 17:57:49.676584 kubelet[2733]: I0514 17:57:49.676468 2733 topology_manager.go:215] "Topology Admit Handler" podUID="80bdfde4-ef00-47c6-8cd8-76e2d3f36542" podNamespace="kube-system" podName="cilium-operator-599987898-rp5zb" May 14 17:57:49.687507 systemd[1]: Created slice kubepods-besteffort-pod80bdfde4_ef00_47c6_8cd8_76e2d3f36542.slice - libcontainer container kubepods-besteffort-pod80bdfde4_ef00_47c6_8cd8_76e2d3f36542.slice. 
May 14 17:57:49.725926 kubelet[2733]: I0514 17:57:49.725736 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp776\" (UniqueName: \"kubernetes.io/projected/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-kube-api-access-vp776\") pod \"cilium-operator-599987898-rp5zb\" (UID: \"80bdfde4-ef00-47c6-8cd8-76e2d3f36542\") " pod="kube-system/cilium-operator-599987898-rp5zb" May 14 17:57:49.726063 kubelet[2733]: I0514 17:57:49.726007 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-cilium-config-path\") pod \"cilium-operator-599987898-rp5zb\" (UID: \"80bdfde4-ef00-47c6-8cd8-76e2d3f36542\") " pod="kube-system/cilium-operator-599987898-rp5zb" May 14 17:57:49.909226 containerd[1502]: time="2025-05-14T17:57:49.908812888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfv4k,Uid:0fe446fb-b1f7-4d66-b6c2-74893ef2a357,Namespace:kube-system,Attempt:0,}" May 14 17:57:49.913423 containerd[1502]: time="2025-05-14T17:57:49.913385884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krrjs,Uid:2c1fde50-1c25-4687-8a12-5fe284b6ae21,Namespace:kube-system,Attempt:0,}" May 14 17:57:49.928814 containerd[1502]: time="2025-05-14T17:57:49.928765295Z" level=info msg="connecting to shim ebc591131024544a8891a37a4034bd93b836dd13708b71f297de67e90e0e4f3c" address="unix:///run/containerd/s/e2287afdafdc7bbe43d6f19469620fa85ac30f953555f1eb0f6ea2bdb01d832f" namespace=k8s.io protocol=ttrpc version=3 May 14 17:57:49.936458 containerd[1502]: time="2025-05-14T17:57:49.936406957Z" level=info msg="connecting to shim 6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246" address="unix:///run/containerd/s/398aa1f20eee62252078b26e09ac3fa160f258791ab7a57ab7fbfa5cbe6be768" namespace=k8s.io protocol=ttrpc version=3 May 14 17:57:49.952341 systemd[1]: Started 
cri-containerd-ebc591131024544a8891a37a4034bd93b836dd13708b71f297de67e90e0e4f3c.scope - libcontainer container ebc591131024544a8891a37a4034bd93b836dd13708b71f297de67e90e0e4f3c. May 14 17:57:49.955599 systemd[1]: Started cri-containerd-6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246.scope - libcontainer container 6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246. May 14 17:57:49.988198 containerd[1502]: time="2025-05-14T17:57:49.988122593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krrjs,Uid:2c1fde50-1c25-4687-8a12-5fe284b6ae21,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\"" May 14 17:57:49.989966 containerd[1502]: time="2025-05-14T17:57:49.989934110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfv4k,Uid:0fe446fb-b1f7-4d66-b6c2-74893ef2a357,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebc591131024544a8891a37a4034bd93b836dd13708b71f297de67e90e0e4f3c\"" May 14 17:57:49.994851 containerd[1502]: time="2025-05-14T17:57:49.994724365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rp5zb,Uid:80bdfde4-ef00-47c6-8cd8-76e2d3f36542,Namespace:kube-system,Attempt:0,}" May 14 17:57:49.997652 containerd[1502]: time="2025-05-14T17:57:49.997598174Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 17:57:50.011910 containerd[1502]: time="2025-05-14T17:57:50.011474451Z" level=info msg="CreateContainer within sandbox \"ebc591131024544a8891a37a4034bd93b836dd13708b71f297de67e90e0e4f3c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 17:57:50.019402 containerd[1502]: time="2025-05-14T17:57:50.019355580Z" level=info msg="connecting to shim 60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178" 
address="unix:///run/containerd/s/889e4bc9bc8483a38f9edb526dc78f493d6ef718a0bc3fad9f83fad164e52db5" namespace=k8s.io protocol=ttrpc version=3 May 14 17:57:50.020973 containerd[1502]: time="2025-05-14T17:57:50.020668848Z" level=info msg="Container ed3cae36dad27fcafcc685d0dbcc1e8723da2f5d8d88992e53bfa603bedf3e3b: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:50.027945 containerd[1502]: time="2025-05-14T17:57:50.027890882Z" level=info msg="CreateContainer within sandbox \"ebc591131024544a8891a37a4034bd93b836dd13708b71f297de67e90e0e4f3c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed3cae36dad27fcafcc685d0dbcc1e8723da2f5d8d88992e53bfa603bedf3e3b\"" May 14 17:57:50.033706 containerd[1502]: time="2025-05-14T17:57:50.032920336Z" level=info msg="StartContainer for \"ed3cae36dad27fcafcc685d0dbcc1e8723da2f5d8d88992e53bfa603bedf3e3b\"" May 14 17:57:50.034700 containerd[1502]: time="2025-05-14T17:57:50.034660879Z" level=info msg="connecting to shim ed3cae36dad27fcafcc685d0dbcc1e8723da2f5d8d88992e53bfa603bedf3e3b" address="unix:///run/containerd/s/e2287afdafdc7bbe43d6f19469620fa85ac30f953555f1eb0f6ea2bdb01d832f" protocol=ttrpc version=3 May 14 17:57:50.045377 systemd[1]: Started cri-containerd-60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178.scope - libcontainer container 60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178. May 14 17:57:50.054969 systemd[1]: Started cri-containerd-ed3cae36dad27fcafcc685d0dbcc1e8723da2f5d8d88992e53bfa603bedf3e3b.scope - libcontainer container ed3cae36dad27fcafcc685d0dbcc1e8723da2f5d8d88992e53bfa603bedf3e3b. 
May 14 17:57:50.090878 containerd[1502]: time="2025-05-14T17:57:50.090765657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rp5zb,Uid:80bdfde4-ef00-47c6-8cd8-76e2d3f36542,Namespace:kube-system,Attempt:0,} returns sandbox id \"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\"" May 14 17:57:50.106333 containerd[1502]: time="2025-05-14T17:57:50.106282654Z" level=info msg="StartContainer for \"ed3cae36dad27fcafcc685d0dbcc1e8723da2f5d8d88992e53bfa603bedf3e3b\" returns successfully" May 14 17:57:50.480663 update_engine[1491]: I20250514 17:57:50.480587 1491 update_attempter.cc:509] Updating boot flags... May 14 17:57:50.707680 kubelet[2733]: I0514 17:57:50.707466 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jfv4k" podStartSLOduration=1.707445335 podStartE2EDuration="1.707445335s" podCreationTimestamp="2025-05-14 17:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:57:50.706083463 +0000 UTC m=+15.144665713" watchObservedRunningTime="2025-05-14 17:57:50.707445335 +0000 UTC m=+15.146027585" May 14 17:57:55.065191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277657749.mount: Deactivated successfully. 
May 14 17:57:56.380390 containerd[1502]: time="2025-05-14T17:57:56.380336201Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:57:56.381477 containerd[1502]: time="2025-05-14T17:57:56.381275619Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 17:57:56.382204 containerd[1502]: time="2025-05-14T17:57:56.382148793Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:57:56.383756 containerd[1502]: time="2025-05-14T17:57:56.383722730Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.386082993s" May 14 17:57:56.383823 containerd[1502]: time="2025-05-14T17:57:56.383761452Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 17:57:56.387709 containerd[1502]: time="2025-05-14T17:57:56.387655053Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 17:57:56.389683 containerd[1502]: time="2025-05-14T17:57:56.389643616Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 17:57:56.415680 containerd[1502]: time="2025-05-14T17:57:56.415631422Z" level=info msg="Container d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:56.421515 containerd[1502]: time="2025-05-14T17:57:56.421478624Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\"" May 14 17:57:56.422294 containerd[1502]: time="2025-05-14T17:57:56.422111463Z" level=info msg="StartContainer for \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\"" May 14 17:57:56.422871 containerd[1502]: time="2025-05-14T17:57:56.422836708Z" level=info msg="connecting to shim d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35" address="unix:///run/containerd/s/398aa1f20eee62252078b26e09ac3fa160f258791ab7a57ab7fbfa5cbe6be768" protocol=ttrpc version=3 May 14 17:57:56.463347 systemd[1]: Started cri-containerd-d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35.scope - libcontainer container d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35. May 14 17:57:56.491396 containerd[1502]: time="2025-05-14T17:57:56.491355383Z" level=info msg="StartContainer for \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" returns successfully" May 14 17:57:56.537071 systemd[1]: cri-containerd-d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35.scope: Deactivated successfully. May 14 17:57:56.537395 systemd[1]: cri-containerd-d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35.scope: Consumed 56ms CPU time, 5.1M memory peak, 3.1M written to disk. 
May 14 17:57:56.634878 containerd[1502]: time="2025-05-14T17:57:56.634408865Z" level=info msg="received exit event container_id:\"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" id:\"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" pid:3163 exited_at:{seconds:1747245476 nanos:561326228}" May 14 17:57:56.641202 containerd[1502]: time="2025-05-14T17:57:56.641127921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" id:\"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" pid:3163 exited_at:{seconds:1747245476 nanos:561326228}" May 14 17:57:56.675252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35-rootfs.mount: Deactivated successfully. May 14 17:57:56.715455 containerd[1502]: time="2025-05-14T17:57:56.714727310Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 17:57:56.722200 containerd[1502]: time="2025-05-14T17:57:56.722147209Z" level=info msg="Container d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:56.769892 containerd[1502]: time="2025-05-14T17:57:56.769849597Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\"" May 14 17:57:56.771264 containerd[1502]: time="2025-05-14T17:57:56.771236403Z" level=info msg="StartContainer for \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\"" May 14 17:57:56.772056 containerd[1502]: time="2025-05-14T17:57:56.772032532Z" level=info msg="connecting to shim 
d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82" address="unix:///run/containerd/s/398aa1f20eee62252078b26e09ac3fa160f258791ab7a57ab7fbfa5cbe6be768" protocol=ttrpc version=3 May 14 17:57:56.791329 systemd[1]: Started cri-containerd-d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82.scope - libcontainer container d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82. May 14 17:57:56.826675 containerd[1502]: time="2025-05-14T17:57:56.826614346Z" level=info msg="StartContainer for \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" returns successfully" May 14 17:57:56.833099 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 17:57:56.833590 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 17:57:56.834354 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 17:57:56.835988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 17:57:56.838756 systemd[1]: cri-containerd-d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82.scope: Deactivated successfully. May 14 17:57:56.840058 containerd[1502]: time="2025-05-14T17:57:56.840020655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" id:\"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" pid:3207 exited_at:{seconds:1747245476 nanos:839649312}" May 14 17:57:56.843552 containerd[1502]: time="2025-05-14T17:57:56.843509790Z" level=info msg="received exit event container_id:\"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" id:\"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" pid:3207 exited_at:{seconds:1747245476 nanos:839649312}" May 14 17:57:56.869120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 17:57:57.716106 containerd[1502]: time="2025-05-14T17:57:57.716063868Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 17:57:57.782265 containerd[1502]: time="2025-05-14T17:57:57.782218218Z" level=info msg="Container 810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:57.784480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482837236.mount: Deactivated successfully. May 14 17:57:57.787357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73497261.mount: Deactivated successfully. May 14 17:57:57.792245 containerd[1502]: time="2025-05-14T17:57:57.792204528Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\"" May 14 17:57:57.793972 containerd[1502]: time="2025-05-14T17:57:57.793703017Z" level=info msg="StartContainer for \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\"" May 14 17:57:57.796002 containerd[1502]: time="2025-05-14T17:57:57.795957190Z" level=info msg="connecting to shim 810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b" address="unix:///run/containerd/s/398aa1f20eee62252078b26e09ac3fa160f258791ab7a57ab7fbfa5cbe6be768" protocol=ttrpc version=3 May 14 17:57:57.820360 systemd[1]: Started cri-containerd-810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b.scope - libcontainer container 810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b. 
May 14 17:57:57.859615 containerd[1502]: time="2025-05-14T17:57:57.859576830Z" level=info msg="StartContainer for \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" returns successfully" May 14 17:57:57.887796 systemd[1]: cri-containerd-810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b.scope: Deactivated successfully. May 14 17:57:57.892532 containerd[1502]: time="2025-05-14T17:57:57.892494816Z" level=info msg="received exit event container_id:\"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" id:\"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" pid:3260 exited_at:{seconds:1747245477 nanos:892287924}" May 14 17:57:57.892745 containerd[1502]: time="2025-05-14T17:57:57.892725910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" id:\"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" pid:3260 exited_at:{seconds:1747245477 nanos:892287924}" May 14 17:57:58.416689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b-rootfs.mount: Deactivated successfully. May 14 17:57:58.731411 containerd[1502]: time="2025-05-14T17:57:58.731329937Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 17:57:58.743295 containerd[1502]: time="2025-05-14T17:57:58.743254251Z" level=info msg="Container 4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:58.744844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880807304.mount: Deactivated successfully. 
May 14 17:57:58.752893 containerd[1502]: time="2025-05-14T17:57:58.752860195Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\"" May 14 17:57:58.753718 containerd[1502]: time="2025-05-14T17:57:58.753625678Z" level=info msg="StartContainer for \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\"" May 14 17:57:58.755314 containerd[1502]: time="2025-05-14T17:57:58.755289412Z" level=info msg="connecting to shim 4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52" address="unix:///run/containerd/s/398aa1f20eee62252078b26e09ac3fa160f258791ab7a57ab7fbfa5cbe6be768" protocol=ttrpc version=3 May 14 17:57:58.779014 systemd[1]: Started cri-containerd-4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52.scope - libcontainer container 4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52. May 14 17:57:58.806465 systemd[1]: cri-containerd-4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52.scope: Deactivated successfully. 
May 14 17:57:58.809044 containerd[1502]: time="2025-05-14T17:57:58.808954048Z" level=info msg="received exit event container_id:\"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" id:\"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" pid:3311 exited_at:{seconds:1747245478 nanos:808678432}" May 14 17:57:58.809328 containerd[1502]: time="2025-05-14T17:57:58.809051293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" id:\"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" pid:3311 exited_at:{seconds:1747245478 nanos:808678432}" May 14 17:57:58.810476 containerd[1502]: time="2025-05-14T17:57:58.810439892Z" level=info msg="StartContainer for \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" returns successfully" May 14 17:57:58.830554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52-rootfs.mount: Deactivated successfully. 
May 14 17:57:58.994324 containerd[1502]: time="2025-05-14T17:57:58.993434563Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:57:58.994874 containerd[1502]: time="2025-05-14T17:57:58.994839922Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 14 17:57:58.996122 containerd[1502]: time="2025-05-14T17:57:58.996091593Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:57:58.997268 containerd[1502]: time="2025-05-14T17:57:58.997235178Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.609528321s" May 14 17:57:58.997381 containerd[1502]: time="2025-05-14T17:57:58.997354144Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 17:57:59.001049 containerd[1502]: time="2025-05-14T17:57:59.001009789Z" level=info msg="CreateContainer within sandbox \"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 17:57:59.006795 containerd[1502]: time="2025-05-14T17:57:59.006767621Z" level=info msg="Container 
e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:59.011634 containerd[1502]: time="2025-05-14T17:57:59.011596283Z" level=info msg="CreateContainer within sandbox \"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\"" May 14 17:57:59.012084 containerd[1502]: time="2025-05-14T17:57:59.012021666Z" level=info msg="StartContainer for \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\"" May 14 17:57:59.012946 containerd[1502]: time="2025-05-14T17:57:59.012884712Z" level=info msg="connecting to shim e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6" address="unix:///run/containerd/s/889e4bc9bc8483a38f9edb526dc78f493d6ef718a0bc3fad9f83fad164e52db5" protocol=ttrpc version=3 May 14 17:57:59.035326 systemd[1]: Started cri-containerd-e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6.scope - libcontainer container e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6. May 14 17:57:59.066778 containerd[1502]: time="2025-05-14T17:57:59.066682667Z" level=info msg="StartContainer for \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" returns successfully" May 14 17:57:59.742138 containerd[1502]: time="2025-05-14T17:57:59.742067421Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 17:57:59.763035 containerd[1502]: time="2025-05-14T17:57:59.762989995Z" level=info msg="Container 27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33: CDI devices from CRI Config.CDIDevices: []" May 14 17:57:59.765543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1057913912.mount: Deactivated successfully. 
May 14 17:57:59.768794 kubelet[2733]: I0514 17:57:59.768733 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rp5zb" podStartSLOduration=1.863741214 podStartE2EDuration="10.768715745s" podCreationTimestamp="2025-05-14 17:57:49 +0000 UTC" firstStartedPulling="2025-05-14 17:57:50.093263463 +0000 UTC m=+14.531845793" lastFinishedPulling="2025-05-14 17:57:58.998238074 +0000 UTC m=+23.436820324" observedRunningTime="2025-05-14 17:57:59.749456822 +0000 UTC m=+24.188039072" watchObservedRunningTime="2025-05-14 17:57:59.768715745 +0000 UTC m=+24.207297995" May 14 17:57:59.772412 containerd[1502]: time="2025-05-14T17:57:59.772375424Z" level=info msg="CreateContainer within sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\"" May 14 17:57:59.773007 containerd[1502]: time="2025-05-14T17:57:59.772935894Z" level=info msg="StartContainer for \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\"" May 14 17:57:59.774078 containerd[1502]: time="2025-05-14T17:57:59.773901426Z" level=info msg="connecting to shim 27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33" address="unix:///run/containerd/s/398aa1f20eee62252078b26e09ac3fa160f258791ab7a57ab7fbfa5cbe6be768" protocol=ttrpc version=3 May 14 17:57:59.799619 systemd[1]: Started cri-containerd-27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33.scope - libcontainer container 27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33. 
May 14 17:57:59.856449 containerd[1502]: time="2025-05-14T17:57:59.856341453Z" level=info msg="StartContainer for \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" returns successfully" May 14 17:57:59.967892 containerd[1502]: time="2025-05-14T17:57:59.963706190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" id:\"60449e0b24f943bac06e96915fbda6dbcb28c85b30e815a44c38385297271b9e\" pid:3416 exited_at:{seconds:1747245479 nanos:962842384}" May 14 17:57:59.977312 kubelet[2733]: I0514 17:57:59.977275 2733 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 17:58:00.027636 kubelet[2733]: I0514 17:58:00.027088 2733 topology_manager.go:215] "Topology Admit Handler" podUID="84a7463f-d8f0-450e-92a2-9a3a3c0a08a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-646g7" May 14 17:58:00.031505 kubelet[2733]: I0514 17:58:00.031132 2733 topology_manager.go:215] "Topology Admit Handler" podUID="b7eb8841-e821-4619-a70c-7cf068021b80" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9m5t9" May 14 17:58:00.045611 systemd[1]: Created slice kubepods-burstable-pod84a7463f_d8f0_450e_92a2_9a3a3c0a08a7.slice - libcontainer container kubepods-burstable-pod84a7463f_d8f0_450e_92a2_9a3a3c0a08a7.slice. May 14 17:58:00.052979 systemd[1]: Created slice kubepods-burstable-podb7eb8841_e821_4619_a70c_7cf068021b80.slice - libcontainer container kubepods-burstable-podb7eb8841_e821_4619_a70c_7cf068021b80.slice. 
May 14 17:58:00.102172 kubelet[2733]: I0514 17:58:00.102122 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7eb8841-e821-4619-a70c-7cf068021b80-config-volume\") pod \"coredns-7db6d8ff4d-9m5t9\" (UID: \"b7eb8841-e821-4619-a70c-7cf068021b80\") " pod="kube-system/coredns-7db6d8ff4d-9m5t9" May 14 17:58:00.102308 kubelet[2733]: I0514 17:58:00.102185 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjbv\" (UniqueName: \"kubernetes.io/projected/b7eb8841-e821-4619-a70c-7cf068021b80-kube-api-access-qwjbv\") pod \"coredns-7db6d8ff4d-9m5t9\" (UID: \"b7eb8841-e821-4619-a70c-7cf068021b80\") " pod="kube-system/coredns-7db6d8ff4d-9m5t9" May 14 17:58:00.102308 kubelet[2733]: I0514 17:58:00.102212 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84a7463f-d8f0-450e-92a2-9a3a3c0a08a7-config-volume\") pod \"coredns-7db6d8ff4d-646g7\" (UID: \"84a7463f-d8f0-450e-92a2-9a3a3c0a08a7\") " pod="kube-system/coredns-7db6d8ff4d-646g7" May 14 17:58:00.102308 kubelet[2733]: I0514 17:58:00.102229 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqjk\" (UniqueName: \"kubernetes.io/projected/84a7463f-d8f0-450e-92a2-9a3a3c0a08a7-kube-api-access-gsqjk\") pod \"coredns-7db6d8ff4d-646g7\" (UID: \"84a7463f-d8f0-450e-92a2-9a3a3c0a08a7\") " pod="kube-system/coredns-7db6d8ff4d-646g7" May 14 17:58:00.352682 containerd[1502]: time="2025-05-14T17:58:00.352208055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-646g7,Uid:84a7463f-d8f0-450e-92a2-9a3a3c0a08a7,Namespace:kube-system,Attempt:0,}" May 14 17:58:00.358541 containerd[1502]: time="2025-05-14T17:58:00.358492381Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9m5t9,Uid:b7eb8841-e821-4619-a70c-7cf068021b80,Namespace:kube-system,Attempt:0,}" May 14 17:58:00.757548 kubelet[2733]: I0514 17:58:00.757487 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-krrjs" podStartSLOduration=5.365965736 podStartE2EDuration="11.757470228s" podCreationTimestamp="2025-05-14 17:57:49 +0000 UTC" firstStartedPulling="2025-05-14 17:57:49.995832181 +0000 UTC m=+14.434414391" lastFinishedPulling="2025-05-14 17:57:56.387336633 +0000 UTC m=+20.825918883" observedRunningTime="2025-05-14 17:58:00.757322541 +0000 UTC m=+25.195904831" watchObservedRunningTime="2025-05-14 17:58:00.757470228 +0000 UTC m=+25.196052478" May 14 17:58:02.960782 systemd-networkd[1439]: cilium_host: Link UP May 14 17:58:02.960928 systemd-networkd[1439]: cilium_net: Link UP May 14 17:58:02.961059 systemd-networkd[1439]: cilium_net: Gained carrier May 14 17:58:02.961654 systemd-networkd[1439]: cilium_host: Gained carrier May 14 17:58:03.063293 systemd-networkd[1439]: cilium_vxlan: Link UP May 14 17:58:03.063302 systemd-networkd[1439]: cilium_vxlan: Gained carrier May 14 17:58:03.411223 kernel: NET: Registered PF_ALG protocol family May 14 17:58:03.543302 systemd-networkd[1439]: cilium_host: Gained IPv6LL May 14 17:58:03.927273 systemd-networkd[1439]: cilium_net: Gained IPv6LL May 14 17:58:04.013018 systemd-networkd[1439]: lxc_health: Link UP May 14 17:58:04.013895 systemd-networkd[1439]: lxc_health: Gained carrier May 14 17:58:04.184293 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL May 14 17:58:04.492568 systemd-networkd[1439]: lxccdd6ed3d8ee4: Link UP May 14 17:58:04.509860 systemd-networkd[1439]: lxce3caaa3b6a58: Link UP May 14 17:58:04.511178 kernel: eth0: renamed from tmp245e0 May 14 17:58:04.512844 systemd-networkd[1439]: lxccdd6ed3d8ee4: Gained carrier May 14 17:58:04.514846 systemd-networkd[1439]: lxce3caaa3b6a58: Gained carrier May 14 17:58:04.515272 kernel: eth0: renamed 
from tmpb9662 May 14 17:58:05.718225 systemd[1]: Started sshd@7-10.0.0.30:22-10.0.0.1:51240.service - OpenSSH per-connection server daemon (10.0.0.1:51240). May 14 17:58:05.771778 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 51240 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:05.775442 sshd-session[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:05.780955 systemd-logind[1488]: New session 8 of user core. May 14 17:58:05.792357 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 17:58:05.930456 sshd[3897]: Connection closed by 10.0.0.1 port 51240 May 14 17:58:05.931009 sshd-session[3893]: pam_unix(sshd:session): session closed for user core May 14 17:58:05.935790 systemd[1]: sshd@7-10.0.0.30:22-10.0.0.1:51240.service: Deactivated successfully. May 14 17:58:05.940627 systemd[1]: session-8.scope: Deactivated successfully. May 14 17:58:05.943035 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit. May 14 17:58:05.946420 systemd-logind[1488]: Removed session 8. 
May 14 17:58:05.975698 systemd-networkd[1439]: lxccdd6ed3d8ee4: Gained IPv6LL May 14 17:58:05.975935 systemd-networkd[1439]: lxc_health: Gained IPv6LL May 14 17:58:06.551309 systemd-networkd[1439]: lxce3caaa3b6a58: Gained IPv6LL May 14 17:58:08.117363 containerd[1502]: time="2025-05-14T17:58:08.117313754Z" level=info msg="connecting to shim 245e036efe0e765f728a6a3c65881b0032622a07fdaed17876a84fd1810089ba" address="unix:///run/containerd/s/b190953216a212462171a710c8d3e6cbc5f6c84fff573670233a830dcdede195" namespace=k8s.io protocol=ttrpc version=3 May 14 17:58:08.119670 containerd[1502]: time="2025-05-14T17:58:08.119255068Z" level=info msg="connecting to shim b9662447ff6520830162a2c2df88cb7bd74d25f8a6a70470b00a63056ac459b2" address="unix:///run/containerd/s/34bd5e80ead252773f2011d96afe1df349b455b2d40811c81e00022bc39af6c5" namespace=k8s.io protocol=ttrpc version=3 May 14 17:58:08.137463 systemd[1]: Started cri-containerd-245e036efe0e765f728a6a3c65881b0032622a07fdaed17876a84fd1810089ba.scope - libcontainer container 245e036efe0e765f728a6a3c65881b0032622a07fdaed17876a84fd1810089ba. May 14 17:58:08.143510 systemd[1]: Started cri-containerd-b9662447ff6520830162a2c2df88cb7bd74d25f8a6a70470b00a63056ac459b2.scope - libcontainer container b9662447ff6520830162a2c2df88cb7bd74d25f8a6a70470b00a63056ac459b2. 
May 14 17:58:08.151052 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 17:58:08.156843 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 17:58:08.184566 containerd[1502]: time="2025-05-14T17:58:08.184505537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-646g7,Uid:84a7463f-d8f0-450e-92a2-9a3a3c0a08a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"245e036efe0e765f728a6a3c65881b0032622a07fdaed17876a84fd1810089ba\"" May 14 17:58:08.188265 containerd[1502]: time="2025-05-14T17:58:08.188232800Z" level=info msg="CreateContainer within sandbox \"245e036efe0e765f728a6a3c65881b0032622a07fdaed17876a84fd1810089ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 17:58:08.198317 containerd[1502]: time="2025-05-14T17:58:08.198267386Z" level=info msg="Container efe3b7dd10dadd7c803eed4a8814a3a7dce262047d58d6bf5e7941920bec1fc7: CDI devices from CRI Config.CDIDevices: []" May 14 17:58:08.204687 containerd[1502]: time="2025-05-14T17:58:08.204660512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9m5t9,Uid:b7eb8841-e821-4619-a70c-7cf068021b80,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9662447ff6520830162a2c2df88cb7bd74d25f8a6a70470b00a63056ac459b2\"" May 14 17:58:08.207599 containerd[1502]: time="2025-05-14T17:58:08.207554063Z" level=info msg="CreateContainer within sandbox \"245e036efe0e765f728a6a3c65881b0032622a07fdaed17876a84fd1810089ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efe3b7dd10dadd7c803eed4a8814a3a7dce262047d58d6bf5e7941920bec1fc7\"" May 14 17:58:08.207904 containerd[1502]: time="2025-05-14T17:58:08.207877076Z" level=info msg="CreateContainer within sandbox \"b9662447ff6520830162a2c2df88cb7bd74d25f8a6a70470b00a63056ac459b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" 
May 14 17:58:08.208538 containerd[1502]: time="2025-05-14T17:58:08.208511020Z" level=info msg="StartContainer for \"efe3b7dd10dadd7c803eed4a8814a3a7dce262047d58d6bf5e7941920bec1fc7\"" May 14 17:58:08.211790 containerd[1502]: time="2025-05-14T17:58:08.211743704Z" level=info msg="connecting to shim efe3b7dd10dadd7c803eed4a8814a3a7dce262047d58d6bf5e7941920bec1fc7" address="unix:///run/containerd/s/b190953216a212462171a710c8d3e6cbc5f6c84fff573670233a830dcdede195" protocol=ttrpc version=3 May 14 17:58:08.218179 containerd[1502]: time="2025-05-14T17:58:08.217616450Z" level=info msg="Container 1101847fe2728d6d72619d808451ff0c06ca13b74afd23f00b22d7e665d28f00: CDI devices from CRI Config.CDIDevices: []" May 14 17:58:08.222647 containerd[1502]: time="2025-05-14T17:58:08.222607082Z" level=info msg="CreateContainer within sandbox \"b9662447ff6520830162a2c2df88cb7bd74d25f8a6a70470b00a63056ac459b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1101847fe2728d6d72619d808451ff0c06ca13b74afd23f00b22d7e665d28f00\"" May 14 17:58:08.223234 containerd[1502]: time="2025-05-14T17:58:08.223207905Z" level=info msg="StartContainer for \"1101847fe2728d6d72619d808451ff0c06ca13b74afd23f00b22d7e665d28f00\"" May 14 17:58:08.224994 containerd[1502]: time="2025-05-14T17:58:08.224963853Z" level=info msg="connecting to shim 1101847fe2728d6d72619d808451ff0c06ca13b74afd23f00b22d7e665d28f00" address="unix:///run/containerd/s/34bd5e80ead252773f2011d96afe1df349b455b2d40811c81e00022bc39af6c5" protocol=ttrpc version=3 May 14 17:58:08.231313 systemd[1]: Started cri-containerd-efe3b7dd10dadd7c803eed4a8814a3a7dce262047d58d6bf5e7941920bec1fc7.scope - libcontainer container efe3b7dd10dadd7c803eed4a8814a3a7dce262047d58d6bf5e7941920bec1fc7. May 14 17:58:08.253386 systemd[1]: Started cri-containerd-1101847fe2728d6d72619d808451ff0c06ca13b74afd23f00b22d7e665d28f00.scope - libcontainer container 1101847fe2728d6d72619d808451ff0c06ca13b74afd23f00b22d7e665d28f00. 
May 14 17:58:08.279595 containerd[1502]: time="2025-05-14T17:58:08.277855966Z" level=info msg="StartContainer for \"efe3b7dd10dadd7c803eed4a8814a3a7dce262047d58d6bf5e7941920bec1fc7\" returns successfully" May 14 17:58:08.315794 containerd[1502]: time="2025-05-14T17:58:08.314334449Z" level=info msg="StartContainer for \"1101847fe2728d6d72619d808451ff0c06ca13b74afd23f00b22d7e665d28f00\" returns successfully" May 14 17:58:08.781692 kubelet[2733]: I0514 17:58:08.781471 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9m5t9" podStartSLOduration=19.781455048 podStartE2EDuration="19.781455048s" podCreationTimestamp="2025-05-14 17:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:58:08.780351486 +0000 UTC m=+33.218933736" watchObservedRunningTime="2025-05-14 17:58:08.781455048 +0000 UTC m=+33.220037258" May 14 17:58:08.791426 kubelet[2733]: I0514 17:58:08.791375 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-646g7" podStartSLOduration=19.791357389 podStartE2EDuration="19.791357389s" podCreationTimestamp="2025-05-14 17:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:58:08.790689923 +0000 UTC m=+33.229272173" watchObservedRunningTime="2025-05-14 17:58:08.791357389 +0000 UTC m=+33.229939639" May 14 17:58:10.946439 systemd[1]: Started sshd@8-10.0.0.30:22-10.0.0.1:51256.service - OpenSSH per-connection server daemon (10.0.0.1:51256). 
May 14 17:58:11.000067 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 51256 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:11.001368 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:11.005366 systemd-logind[1488]: New session 9 of user core. May 14 17:58:11.021313 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 17:58:11.138205 sshd[4089]: Connection closed by 10.0.0.1 port 51256 May 14 17:58:11.138466 sshd-session[4087]: pam_unix(sshd:session): session closed for user core May 14 17:58:11.142352 systemd[1]: sshd@8-10.0.0.30:22-10.0.0.1:51256.service: Deactivated successfully. May 14 17:58:11.144328 systemd[1]: session-9.scope: Deactivated successfully. May 14 17:58:11.145085 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit. May 14 17:58:11.146400 systemd-logind[1488]: Removed session 9. May 14 17:58:16.152002 systemd[1]: Started sshd@9-10.0.0.30:22-10.0.0.1:37756.service - OpenSSH per-connection server daemon (10.0.0.1:37756). May 14 17:58:16.213976 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 37756 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:16.215375 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:16.219367 systemd-logind[1488]: New session 10 of user core. May 14 17:58:16.229354 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 17:58:16.346874 sshd[4107]: Connection closed by 10.0.0.1 port 37756 May 14 17:58:16.348139 sshd-session[4105]: pam_unix(sshd:session): session closed for user core May 14 17:58:16.352411 systemd[1]: sshd@9-10.0.0.30:22-10.0.0.1:37756.service: Deactivated successfully. May 14 17:58:16.356679 systemd[1]: session-10.scope: Deactivated successfully. May 14 17:58:16.357555 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit. 
May 14 17:58:16.359344 systemd-logind[1488]: Removed session 10. May 14 17:58:21.362635 systemd[1]: Started sshd@10-10.0.0.30:22-10.0.0.1:37760.service - OpenSSH per-connection server daemon (10.0.0.1:37760). May 14 17:58:21.427176 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 37760 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:21.428587 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:21.435084 systemd-logind[1488]: New session 11 of user core. May 14 17:58:21.443339 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 17:58:21.569785 sshd[4126]: Connection closed by 10.0.0.1 port 37760 May 14 17:58:21.569966 sshd-session[4124]: pam_unix(sshd:session): session closed for user core May 14 17:58:21.583580 systemd[1]: sshd@10-10.0.0.30:22-10.0.0.1:37760.service: Deactivated successfully. May 14 17:58:21.586700 systemd[1]: session-11.scope: Deactivated successfully. May 14 17:58:21.587851 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit. May 14 17:58:21.590973 systemd-logind[1488]: Removed session 11. May 14 17:58:21.593486 systemd[1]: Started sshd@11-10.0.0.30:22-10.0.0.1:37768.service - OpenSSH per-connection server daemon (10.0.0.1:37768). May 14 17:58:21.652127 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 37768 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:21.654073 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:21.660922 systemd-logind[1488]: New session 12 of user core. May 14 17:58:21.673366 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 14 17:58:21.848849 sshd[4142]: Connection closed by 10.0.0.1 port 37768 May 14 17:58:21.850594 sshd-session[4140]: pam_unix(sshd:session): session closed for user core May 14 17:58:21.861819 systemd[1]: sshd@11-10.0.0.30:22-10.0.0.1:37768.service: Deactivated successfully. May 14 17:58:21.863758 systemd[1]: session-12.scope: Deactivated successfully. May 14 17:58:21.864602 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit. May 14 17:58:21.869074 systemd[1]: Started sshd@12-10.0.0.30:22-10.0.0.1:37778.service - OpenSSH per-connection server daemon (10.0.0.1:37778). May 14 17:58:21.872563 systemd-logind[1488]: Removed session 12. May 14 17:58:21.930964 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 37778 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:21.931962 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:21.936290 systemd-logind[1488]: New session 13 of user core. May 14 17:58:21.949333 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 17:58:22.066316 sshd[4156]: Connection closed by 10.0.0.1 port 37778 May 14 17:58:22.066750 sshd-session[4154]: pam_unix(sshd:session): session closed for user core May 14 17:58:22.070666 systemd[1]: sshd@12-10.0.0.30:22-10.0.0.1:37778.service: Deactivated successfully. May 14 17:58:22.072853 systemd[1]: session-13.scope: Deactivated successfully. May 14 17:58:22.073581 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit. May 14 17:58:22.074986 systemd-logind[1488]: Removed session 13. May 14 17:58:27.082572 systemd[1]: Started sshd@13-10.0.0.30:22-10.0.0.1:52244.service - OpenSSH per-connection server daemon (10.0.0.1:52244). 
May 14 17:58:27.134612 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 52244 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:27.135815 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:27.139768 systemd-logind[1488]: New session 14 of user core. May 14 17:58:27.153331 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 17:58:27.270350 sshd[4171]: Connection closed by 10.0.0.1 port 52244 May 14 17:58:27.271069 sshd-session[4169]: pam_unix(sshd:session): session closed for user core May 14 17:58:27.274582 systemd[1]: sshd@13-10.0.0.30:22-10.0.0.1:52244.service: Deactivated successfully. May 14 17:58:27.277936 systemd[1]: session-14.scope: Deactivated successfully. May 14 17:58:27.278649 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit. May 14 17:58:27.279709 systemd-logind[1488]: Removed session 14. May 14 17:58:32.282695 systemd[1]: Started sshd@14-10.0.0.30:22-10.0.0.1:52252.service - OpenSSH per-connection server daemon (10.0.0.1:52252). May 14 17:58:32.329040 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 52252 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:32.330974 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:32.336469 systemd-logind[1488]: New session 15 of user core. May 14 17:58:32.344417 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 17:58:32.468379 sshd[4188]: Connection closed by 10.0.0.1 port 52252 May 14 17:58:32.468729 sshd-session[4186]: pam_unix(sshd:session): session closed for user core May 14 17:58:32.482473 systemd[1]: sshd@14-10.0.0.30:22-10.0.0.1:52252.service: Deactivated successfully. May 14 17:58:32.484241 systemd[1]: session-15.scope: Deactivated successfully. May 14 17:58:32.484992 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit. 
May 14 17:58:32.487272 systemd[1]: Started sshd@15-10.0.0.30:22-10.0.0.1:41908.service - OpenSSH per-connection server daemon (10.0.0.1:41908). May 14 17:58:32.489523 systemd-logind[1488]: Removed session 15. May 14 17:58:32.536988 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 41908 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:32.538051 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:32.543169 systemd-logind[1488]: New session 16 of user core. May 14 17:58:32.552386 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 17:58:32.757747 sshd[4203]: Connection closed by 10.0.0.1 port 41908 May 14 17:58:32.759129 sshd-session[4201]: pam_unix(sshd:session): session closed for user core May 14 17:58:32.772298 systemd[1]: sshd@15-10.0.0.30:22-10.0.0.1:41908.service: Deactivated successfully. May 14 17:58:32.773849 systemd[1]: session-16.scope: Deactivated successfully. May 14 17:58:32.774535 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit. May 14 17:58:32.776977 systemd[1]: Started sshd@16-10.0.0.30:22-10.0.0.1:41924.service - OpenSSH per-connection server daemon (10.0.0.1:41924). May 14 17:58:32.777631 systemd-logind[1488]: Removed session 16. May 14 17:58:32.838815 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 41924 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:32.839934 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:32.843595 systemd-logind[1488]: New session 17 of user core. May 14 17:58:32.853377 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 14 17:58:34.164841 sshd[4216]: Connection closed by 10.0.0.1 port 41924 May 14 17:58:34.163969 sshd-session[4214]: pam_unix(sshd:session): session closed for user core May 14 17:58:34.176995 systemd[1]: sshd@16-10.0.0.30:22-10.0.0.1:41924.service: Deactivated successfully. May 14 17:58:34.182022 systemd[1]: session-17.scope: Deactivated successfully. May 14 17:58:34.187045 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit. May 14 17:58:34.190377 systemd[1]: Started sshd@17-10.0.0.30:22-10.0.0.1:41928.service - OpenSSH per-connection server daemon (10.0.0.1:41928). May 14 17:58:34.193310 systemd-logind[1488]: Removed session 17. May 14 17:58:34.241326 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 41928 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:34.242559 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:34.246822 systemd-logind[1488]: New session 18 of user core. May 14 17:58:34.262357 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 17:58:34.491564 sshd[4239]: Connection closed by 10.0.0.1 port 41928 May 14 17:58:34.493132 sshd-session[4236]: pam_unix(sshd:session): session closed for user core May 14 17:58:34.502009 systemd[1]: sshd@17-10.0.0.30:22-10.0.0.1:41928.service: Deactivated successfully. May 14 17:58:34.503861 systemd[1]: session-18.scope: Deactivated successfully. May 14 17:58:34.504975 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit. May 14 17:58:34.508636 systemd[1]: Started sshd@18-10.0.0.30:22-10.0.0.1:41942.service - OpenSSH per-connection server daemon (10.0.0.1:41942). May 14 17:58:34.509536 systemd-logind[1488]: Removed session 18. 
May 14 17:58:34.573590 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 41942 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:34.575132 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:34.587146 systemd-logind[1488]: New session 19 of user core. May 14 17:58:34.593322 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 17:58:34.701509 sshd[4253]: Connection closed by 10.0.0.1 port 41942 May 14 17:58:34.701944 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 14 17:58:34.705223 systemd[1]: sshd@18-10.0.0.30:22-10.0.0.1:41942.service: Deactivated successfully. May 14 17:58:34.707287 systemd[1]: session-19.scope: Deactivated successfully. May 14 17:58:34.709481 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit. May 14 17:58:34.710478 systemd-logind[1488]: Removed session 19. May 14 17:58:39.713491 systemd[1]: Started sshd@19-10.0.0.30:22-10.0.0.1:41952.service - OpenSSH per-connection server daemon (10.0.0.1:41952). May 14 17:58:39.757488 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 41952 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:39.758784 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:39.763274 systemd-logind[1488]: New session 20 of user core. May 14 17:58:39.770325 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 17:58:39.880386 sshd[4274]: Connection closed by 10.0.0.1 port 41952 May 14 17:58:39.880712 sshd-session[4272]: pam_unix(sshd:session): session closed for user core May 14 17:58:39.884330 systemd[1]: sshd@19-10.0.0.30:22-10.0.0.1:41952.service: Deactivated successfully. May 14 17:58:39.886645 systemd[1]: session-20.scope: Deactivated successfully. May 14 17:58:39.887651 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit. 
May 14 17:58:39.888913 systemd-logind[1488]: Removed session 20. May 14 17:58:44.893341 systemd[1]: Started sshd@20-10.0.0.30:22-10.0.0.1:42820.service - OpenSSH per-connection server daemon (10.0.0.1:42820). May 14 17:58:44.949582 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 42820 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:44.950965 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:44.955140 systemd-logind[1488]: New session 21 of user core. May 14 17:58:44.964337 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 17:58:45.075924 sshd[4290]: Connection closed by 10.0.0.1 port 42820 May 14 17:58:45.076271 sshd-session[4288]: pam_unix(sshd:session): session closed for user core May 14 17:58:45.079125 systemd[1]: sshd@20-10.0.0.30:22-10.0.0.1:42820.service: Deactivated successfully. May 14 17:58:45.081106 systemd[1]: session-21.scope: Deactivated successfully. May 14 17:58:45.083280 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit. May 14 17:58:45.085317 systemd-logind[1488]: Removed session 21. May 14 17:58:50.091331 systemd[1]: Started sshd@21-10.0.0.30:22-10.0.0.1:42830.service - OpenSSH per-connection server daemon (10.0.0.1:42830). May 14 17:58:50.147106 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 42830 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:50.148428 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:50.152226 systemd-logind[1488]: New session 22 of user core. May 14 17:58:50.159367 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 14 17:58:50.271573 sshd[4305]: Connection closed by 10.0.0.1 port 42830 May 14 17:58:50.272497 sshd-session[4303]: pam_unix(sshd:session): session closed for user core May 14 17:58:50.280213 systemd[1]: sshd@21-10.0.0.30:22-10.0.0.1:42830.service: Deactivated successfully. May 14 17:58:50.281934 systemd[1]: session-22.scope: Deactivated successfully. May 14 17:58:50.283822 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit. May 14 17:58:50.285751 systemd-logind[1488]: Removed session 22. May 14 17:58:50.287550 systemd[1]: Started sshd@22-10.0.0.30:22-10.0.0.1:42840.service - OpenSSH per-connection server daemon (10.0.0.1:42840). May 14 17:58:50.340021 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 42840 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:50.341807 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:50.346226 systemd-logind[1488]: New session 23 of user core. May 14 17:58:50.358322 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 17:58:52.391935 containerd[1502]: time="2025-05-14T17:58:52.391858946Z" level=info msg="StopContainer for \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" with timeout 30 (s)" May 14 17:58:52.392448 containerd[1502]: time="2025-05-14T17:58:52.392412023Z" level=info msg="Stop container \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" with signal terminated" May 14 17:58:52.411984 systemd[1]: cri-containerd-e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6.scope: Deactivated successfully. 
May 14 17:58:52.413671 containerd[1502]: time="2025-05-14T17:58:52.413476725Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 17:58:52.415050 containerd[1502]: time="2025-05-14T17:58:52.414997318Z" level=info msg="received exit event container_id:\"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" id:\"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" pid:3352 exited_at:{seconds:1747245532 nanos:414711159}" May 14 17:58:52.415817 containerd[1502]: time="2025-05-14T17:58:52.415784874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" id:\"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" pid:3352 exited_at:{seconds:1747245532 nanos:414711159}" May 14 17:58:52.419568 containerd[1502]: time="2025-05-14T17:58:52.419538817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" id:\"5f3876fa2c159e9feebe81f77c1ba054b54791d57360a0fb35af13a88c8ebda3\" pid:4350 exited_at:{seconds:1747245532 nanos:419126299}" May 14 17:58:52.421129 containerd[1502]: time="2025-05-14T17:58:52.421104089Z" level=info msg="StopContainer for \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" with timeout 2 (s)" May 14 17:58:52.421545 containerd[1502]: time="2025-05-14T17:58:52.421389768Z" level=info msg="Stop container \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" with signal terminated" May 14 17:58:52.428695 systemd-networkd[1439]: lxc_health: Link DOWN May 14 17:58:52.428701 systemd-networkd[1439]: lxc_health: Lost carrier May 14 17:58:52.440866 systemd[1]: 
cri-containerd-27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33.scope: Deactivated successfully. May 14 17:58:52.441243 systemd[1]: cri-containerd-27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33.scope: Consumed 6.496s CPU time, 123.9M memory peak, 184K read from disk, 14.3M written to disk. May 14 17:58:52.443108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6-rootfs.mount: Deactivated successfully. May 14 17:58:52.443849 containerd[1502]: time="2025-05-14T17:58:52.443818024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" id:\"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" pid:3388 exited_at:{seconds:1747245532 nanos:442323151}" May 14 17:58:52.445135 containerd[1502]: time="2025-05-14T17:58:52.445104978Z" level=info msg="received exit event container_id:\"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" id:\"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" pid:3388 exited_at:{seconds:1747245532 nanos:442323151}" May 14 17:58:52.463656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33-rootfs.mount: Deactivated successfully. 
May 14 17:58:52.525933 containerd[1502]: time="2025-05-14T17:58:52.525865681Z" level=info msg="StopContainer for \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" returns successfully" May 14 17:58:52.527951 containerd[1502]: time="2025-05-14T17:58:52.527901072Z" level=info msg="StopContainer for \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" returns successfully" May 14 17:58:52.528586 containerd[1502]: time="2025-05-14T17:58:52.528448869Z" level=info msg="StopPodSandbox for \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\"" May 14 17:58:52.529073 containerd[1502]: time="2025-05-14T17:58:52.529027626Z" level=info msg="StopPodSandbox for \"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\"" May 14 17:58:52.530603 containerd[1502]: time="2025-05-14T17:58:52.530564379Z" level=info msg="Container to stop \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 17:58:52.538174 containerd[1502]: time="2025-05-14T17:58:52.538120104Z" level=info msg="Container to stop \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 17:58:52.538588 containerd[1502]: time="2025-05-14T17:58:52.538565662Z" level=info msg="Container to stop \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 17:58:52.538695 containerd[1502]: time="2025-05-14T17:58:52.538679901Z" level=info msg="Container to stop \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 17:58:52.538816 containerd[1502]: time="2025-05-14T17:58:52.538732981Z" level=info msg="Container to stop \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" May 14 17:58:52.538895 containerd[1502]: time="2025-05-14T17:58:52.538866261Z" level=info msg="Container to stop \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 17:58:52.544519 systemd[1]: cri-containerd-60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178.scope: Deactivated successfully. May 14 17:58:52.545720 containerd[1502]: time="2025-05-14T17:58:52.545689349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\" id:\"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\" pid:2955 exit_status:137 exited_at:{seconds:1747245532 nanos:545358110}" May 14 17:58:52.548704 systemd[1]: cri-containerd-6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246.scope: Deactivated successfully. May 14 17:58:52.575656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246-rootfs.mount: Deactivated successfully. May 14 17:58:52.581934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178-rootfs.mount: Deactivated successfully. 
May 14 17:58:52.586180 containerd[1502]: time="2025-05-14T17:58:52.586056761Z" level=info msg="shim disconnected" id=6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246 namespace=k8s.io May 14 17:58:52.606582 containerd[1502]: time="2025-05-14T17:58:52.586748717Z" level=warning msg="cleaning up after shim disconnected" id=6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246 namespace=k8s.io May 14 17:58:52.606934 containerd[1502]: time="2025-05-14T17:58:52.606740104Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 17:58:52.606934 containerd[1502]: time="2025-05-14T17:58:52.586337399Z" level=info msg="shim disconnected" id=60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178 namespace=k8s.io May 14 17:58:52.606934 containerd[1502]: time="2025-05-14T17:58:52.606862904Z" level=warning msg="cleaning up after shim disconnected" id=60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178 namespace=k8s.io May 14 17:58:52.606934 containerd[1502]: time="2025-05-14T17:58:52.606890344Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 17:58:52.619534 containerd[1502]: time="2025-05-14T17:58:52.619490765Z" level=info msg="received exit event sandbox_id:\"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" exit_status:137 exited_at:{seconds:1747245532 nanos:555396704}" May 14 17:58:52.620997 containerd[1502]: time="2025-05-14T17:58:52.619509685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" id:\"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" pid:2893 exit_status:137 exited_at:{seconds:1747245532 nanos:555396704}" May 14 17:58:52.621021 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178-shm.mount: Deactivated successfully. 
May 14 17:58:52.621114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246-shm.mount: Deactivated successfully. May 14 17:58:52.621283 containerd[1502]: time="2025-05-14T17:58:52.619518885Z" level=info msg="received exit event sandbox_id:\"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\" exit_status:137 exited_at:{seconds:1747245532 nanos:545358110}" May 14 17:58:52.621612 containerd[1502]: time="2025-05-14T17:58:52.621576995Z" level=info msg="TearDown network for sandbox \"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\" successfully" May 14 17:58:52.621612 containerd[1502]: time="2025-05-14T17:58:52.621608355Z" level=info msg="StopPodSandbox for \"60cd31103d83746e53025d8113734f48645c95e79edfd2cc377d193db815c178\" returns successfully" May 14 17:58:52.621719 containerd[1502]: time="2025-05-14T17:58:52.621702115Z" level=info msg="TearDown network for sandbox \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" successfully" May 14 17:58:52.621746 containerd[1502]: time="2025-05-14T17:58:52.621718594Z" level=info msg="StopPodSandbox for \"6ab57c4d93bfd24ed26e25f4ba3497c50f6622a56fb9a939eb49d011f86a7246\" returns successfully" May 14 17:58:52.716028 kubelet[2733]: I0514 17:58:52.715742 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hubble-tls\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.716028 kubelet[2733]: I0514 17:58:52.715793 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-config-path\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.716028 
kubelet[2733]: I0514 17:58:52.715813 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-kernel\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.716028 kubelet[2733]: I0514 17:58:52.715830 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hostproc\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.716028 kubelet[2733]: I0514 17:58:52.715848 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cni-path\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.716028 kubelet[2733]: I0514 17:58:52.715864 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-net\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719632 kubelet[2733]: I0514 17:58:52.715879 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-cgroup\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719632 kubelet[2733]: I0514 17:58:52.715897 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5mbn\" (UniqueName: \"kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-kube-api-access-d5mbn\") pod 
\"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719632 kubelet[2733]: I0514 17:58:52.715916 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp776\" (UniqueName: \"kubernetes.io/projected/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-kube-api-access-vp776\") pod \"80bdfde4-ef00-47c6-8cd8-76e2d3f36542\" (UID: \"80bdfde4-ef00-47c6-8cd8-76e2d3f36542\") " May 14 17:58:52.719632 kubelet[2733]: I0514 17:58:52.715933 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-xtables-lock\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719632 kubelet[2733]: I0514 17:58:52.715949 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-bpf-maps\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719632 kubelet[2733]: I0514 17:58:52.715964 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-lib-modules\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719814 kubelet[2733]: I0514 17:58:52.715979 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-run\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719814 kubelet[2733]: I0514 17:58:52.715997 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/2c1fde50-1c25-4687-8a12-5fe284b6ae21-clustermesh-secrets\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719814 kubelet[2733]: I0514 17:58:52.716017 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-etc-cni-netd\") pod \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\" (UID: \"2c1fde50-1c25-4687-8a12-5fe284b6ae21\") " May 14 17:58:52.719814 kubelet[2733]: I0514 17:58:52.716035 2733 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-cilium-config-path\") pod \"80bdfde4-ef00-47c6-8cd8-76e2d3f36542\" (UID: \"80bdfde4-ef00-47c6-8cd8-76e2d3f36542\") " May 14 17:58:52.726915 kubelet[2733]: I0514 17:58:52.726854 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.730317 kubelet[2733]: I0514 17:58:52.730275 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 17:58:52.730490 kubelet[2733]: I0514 17:58:52.730475 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.730555 kubelet[2733]: I0514 17:58:52.730543 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.730618 kubelet[2733]: I0514 17:58:52.730607 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.730693 kubelet[2733]: I0514 17:58:52.730681 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.730770 kubelet[2733]: I0514 17:58:52.730736 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-kube-api-access-vp776" (OuterVolumeSpecName: "kube-api-access-vp776") pod "80bdfde4-ef00-47c6-8cd8-76e2d3f36542" (UID: "80bdfde4-ef00-47c6-8cd8-76e2d3f36542"). InnerVolumeSpecName "kube-api-access-vp776". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 17:58:52.730805 kubelet[2733]: I0514 17:58:52.730718 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 17:58:52.730858 kubelet[2733]: I0514 17:58:52.730844 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.730925 kubelet[2733]: I0514 17:58:52.730913 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.730997 kubelet[2733]: I0514 17:58:52.730985 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.731069 kubelet[2733]: I0514 17:58:52.731055 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.731141 kubelet[2733]: I0514 17:58:52.731129 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 17:58:52.733116 kubelet[2733]: I0514 17:58:52.733072 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1fde50-1c25-4687-8a12-5fe284b6ae21-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 17:58:52.733433 kubelet[2733]: I0514 17:58:52.733399 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-kube-api-access-d5mbn" (OuterVolumeSpecName: "kube-api-access-d5mbn") pod "2c1fde50-1c25-4687-8a12-5fe284b6ae21" (UID: "2c1fde50-1c25-4687-8a12-5fe284b6ae21"). InnerVolumeSpecName "kube-api-access-d5mbn". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 17:58:52.735037 kubelet[2733]: I0514 17:58:52.734902 2733 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "80bdfde4-ef00-47c6-8cd8-76e2d3f36542" (UID: "80bdfde4-ef00-47c6-8cd8-76e2d3f36542"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 17:58:52.816964 kubelet[2733]: I0514 17:58:52.816905 2733 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-d5mbn\" (UniqueName: \"kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-kube-api-access-d5mbn\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817631 kubelet[2733]: I0514 17:58:52.817599 2733 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vp776\" (UniqueName: \"kubernetes.io/projected/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-kube-api-access-vp776\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817631 kubelet[2733]: I0514 17:58:52.817626 2733 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817631 kubelet[2733]: I0514 17:58:52.817636 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817644 2733 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817652 2733 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817660 2733 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c1fde50-1c25-4687-8a12-5fe284b6ae21-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817667 2733 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817677 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817685 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80bdfde4-ef00-47c6-8cd8-76e2d3f36542-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817691 2733 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" 
May 14 17:58:52.817726 kubelet[2733]: I0514 17:58:52.817699 2733 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817877 kubelet[2733]: I0514 17:58:52.817707 2733 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817877 kubelet[2733]: I0514 17:58:52.817716 2733 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817877 kubelet[2733]: I0514 17:58:52.817723 2733 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.817877 kubelet[2733]: I0514 17:58:52.817730 2733 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c1fde50-1c25-4687-8a12-5fe284b6ae21-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 17:58:52.856198 kubelet[2733]: I0514 17:58:52.855919 2733 scope.go:117] "RemoveContainer" containerID="e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6" May 14 17:58:52.860321 containerd[1502]: time="2025-05-14T17:58:52.860272723Z" level=info msg="RemoveContainer for \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\"" May 14 17:58:52.863317 systemd[1]: Removed slice kubepods-besteffort-pod80bdfde4_ef00_47c6_8cd8_76e2d3f36542.slice - libcontainer container kubepods-besteffort-pod80bdfde4_ef00_47c6_8cd8_76e2d3f36542.slice. 
May 14 17:58:52.874730 systemd[1]: Removed slice kubepods-burstable-pod2c1fde50_1c25_4687_8a12_5fe284b6ae21.slice - libcontainer container kubepods-burstable-pod2c1fde50_1c25_4687_8a12_5fe284b6ae21.slice. May 14 17:58:52.874862 systemd[1]: kubepods-burstable-pod2c1fde50_1c25_4687_8a12_5fe284b6ae21.slice: Consumed 6.661s CPU time, 124.2M memory peak, 188K read from disk, 17.6M written to disk. May 14 17:58:52.888823 containerd[1502]: time="2025-05-14T17:58:52.888759830Z" level=info msg="RemoveContainer for \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" returns successfully" May 14 17:58:52.889096 kubelet[2733]: I0514 17:58:52.889072 2733 scope.go:117] "RemoveContainer" containerID="e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6" May 14 17:58:52.889541 containerd[1502]: time="2025-05-14T17:58:52.889484187Z" level=error msg="ContainerStatus for \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\": not found" May 14 17:58:52.893860 kubelet[2733]: E0514 17:58:52.893538 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\": not found" containerID="e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6" May 14 17:58:52.893860 kubelet[2733]: I0514 17:58:52.893664 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6"} err="failed to get container status \"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"e38d0c45943894edbad3b800a1cca031a4ff74b22f4fa84067741866886ef5b6\": not found" May 14 17:58:52.893860 kubelet[2733]: I0514 17:58:52.893754 2733 scope.go:117] "RemoveContainer" containerID="27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33" May 14 17:58:52.896491 containerd[1502]: time="2025-05-14T17:58:52.896422034Z" level=info msg="RemoveContainer for \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\"" May 14 17:58:52.905052 containerd[1502]: time="2025-05-14T17:58:52.904937355Z" level=info msg="RemoveContainer for \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" returns successfully" May 14 17:58:52.905153 kubelet[2733]: I0514 17:58:52.905134 2733 scope.go:117] "RemoveContainer" containerID="4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52" May 14 17:58:52.906538 containerd[1502]: time="2025-05-14T17:58:52.906510907Z" level=info msg="RemoveContainer for \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\"" May 14 17:58:52.909895 containerd[1502]: time="2025-05-14T17:58:52.909849452Z" level=info msg="RemoveContainer for \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" returns successfully" May 14 17:58:52.910084 kubelet[2733]: I0514 17:58:52.910046 2733 scope.go:117] "RemoveContainer" containerID="810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b" May 14 17:58:52.912227 containerd[1502]: time="2025-05-14T17:58:52.912192281Z" level=info msg="RemoveContainer for \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\"" May 14 17:58:52.921584 containerd[1502]: time="2025-05-14T17:58:52.921542477Z" level=info msg="RemoveContainer for \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" returns successfully" May 14 17:58:52.921791 kubelet[2733]: I0514 17:58:52.921768 2733 scope.go:117] "RemoveContainer" containerID="d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82" May 14 17:58:52.923217 containerd[1502]: 
time="2025-05-14T17:58:52.923179990Z" level=info msg="RemoveContainer for \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\"" May 14 17:58:52.925900 containerd[1502]: time="2025-05-14T17:58:52.925851097Z" level=info msg="RemoveContainer for \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" returns successfully" May 14 17:58:52.926057 kubelet[2733]: I0514 17:58:52.926023 2733 scope.go:117] "RemoveContainer" containerID="d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35" May 14 17:58:52.927367 containerd[1502]: time="2025-05-14T17:58:52.927327210Z" level=info msg="RemoveContainer for \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\"" May 14 17:58:52.929901 containerd[1502]: time="2025-05-14T17:58:52.929835359Z" level=info msg="RemoveContainer for \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" returns successfully" May 14 17:58:52.930022 kubelet[2733]: I0514 17:58:52.929980 2733 scope.go:117] "RemoveContainer" containerID="27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33" May 14 17:58:52.930215 containerd[1502]: time="2025-05-14T17:58:52.930148917Z" level=error msg="ContainerStatus for \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\": not found" May 14 17:58:52.930330 kubelet[2733]: E0514 17:58:52.930297 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\": not found" containerID="27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33" May 14 17:58:52.930361 kubelet[2733]: I0514 17:58:52.930326 2733 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33"} err="failed to get container status \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\": rpc error: code = NotFound desc = an error occurred when try to find container \"27a18febb643f8a0c8faaa8de9a16ed9f43327f5c5f4a8f5d927333f72103e33\": not found" May 14 17:58:52.930361 kubelet[2733]: I0514 17:58:52.930351 2733 scope.go:117] "RemoveContainer" containerID="4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52" May 14 17:58:52.930530 containerd[1502]: time="2025-05-14T17:58:52.930494796Z" level=error msg="ContainerStatus for \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\": not found" May 14 17:58:52.930615 kubelet[2733]: E0514 17:58:52.930599 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\": not found" containerID="4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52" May 14 17:58:52.930645 kubelet[2733]: I0514 17:58:52.930618 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52"} err="failed to get container status \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d68907b329ebd4903f9382ded63ad01c0fe468b754ca6d8b4b64073435c8b52\": not found" May 14 17:58:52.930645 kubelet[2733]: I0514 17:58:52.930638 2733 scope.go:117] "RemoveContainer" containerID="810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b" May 14 17:58:52.930805 
containerd[1502]: time="2025-05-14T17:58:52.930767354Z" level=error msg="ContainerStatus for \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\": not found" May 14 17:58:52.930922 kubelet[2733]: E0514 17:58:52.930901 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\": not found" containerID="810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b" May 14 17:58:52.930948 kubelet[2733]: I0514 17:58:52.930930 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b"} err="failed to get container status \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"810cb86c8610bea565eeeb25c521108adb0fc8eae0fd8dc239e194d9f3f5bb9b\": not found" May 14 17:58:52.930972 kubelet[2733]: I0514 17:58:52.930952 2733 scope.go:117] "RemoveContainer" containerID="d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82" May 14 17:58:52.931126 containerd[1502]: time="2025-05-14T17:58:52.931080473Z" level=error msg="ContainerStatus for \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\": not found" May 14 17:58:52.931203 kubelet[2733]: E0514 17:58:52.931185 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\": not found" containerID="d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82" May 14 17:58:52.931247 kubelet[2733]: I0514 17:58:52.931205 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82"} err="failed to get container status \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0048974ef5f063596b97e9ca23902e218ba331b61cbdbd1872a80885aa5cd82\": not found" May 14 17:58:52.931247 kubelet[2733]: I0514 17:58:52.931225 2733 scope.go:117] "RemoveContainer" containerID="d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35" May 14 17:58:52.931398 containerd[1502]: time="2025-05-14T17:58:52.931346032Z" level=error msg="ContainerStatus for \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\": not found" May 14 17:58:52.931465 kubelet[2733]: E0514 17:58:52.931450 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\": not found" containerID="d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35" May 14 17:58:52.931488 kubelet[2733]: I0514 17:58:52.931474 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35"} err="failed to get container status \"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d6f2c66ccf984dc6944a0cd12ce8f4ecf480f849cd0d4aeddda3e4e9982ebc35\": not found" May 14 17:58:53.443658 systemd[1]: var-lib-kubelet-pods-80bdfde4\x2def00\x2d47c6\x2d8cd8\x2d76e2d3f36542-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvp776.mount: Deactivated successfully. May 14 17:58:53.444061 systemd[1]: var-lib-kubelet-pods-2c1fde50\x2d1c25\x2d4687\x2d8a12\x2d5fe284b6ae21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd5mbn.mount: Deactivated successfully. May 14 17:58:53.444119 systemd[1]: var-lib-kubelet-pods-2c1fde50\x2d1c25\x2d4687\x2d8a12\x2d5fe284b6ae21-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 17:58:53.444191 systemd[1]: var-lib-kubelet-pods-2c1fde50\x2d1c25\x2d4687\x2d8a12\x2d5fe284b6ae21-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 17:58:53.657304 kubelet[2733]: I0514 17:58:53.657266 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" path="/var/lib/kubelet/pods/2c1fde50-1c25-4687-8a12-5fe284b6ae21/volumes" May 14 17:58:53.657805 kubelet[2733]: I0514 17:58:53.657770 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80bdfde4-ef00-47c6-8cd8-76e2d3f36542" path="/var/lib/kubelet/pods/80bdfde4-ef00-47c6-8cd8-76e2d3f36542/volumes" May 14 17:58:54.342240 sshd[4322]: Connection closed by 10.0.0.1 port 42840 May 14 17:58:54.342692 sshd-session[4320]: pam_unix(sshd:session): session closed for user core May 14 17:58:54.356255 systemd[1]: sshd@22-10.0.0.30:22-10.0.0.1:42840.service: Deactivated successfully. May 14 17:58:54.358100 systemd[1]: session-23.scope: Deactivated successfully. May 14 17:58:54.358338 systemd[1]: session-23.scope: Consumed 1.339s CPU time, 23.5M memory peak. May 14 17:58:54.359544 systemd-logind[1488]: Session 23 logged out. Waiting for processes to exit. 
May 14 17:58:54.363012 systemd[1]: Started sshd@23-10.0.0.30:22-10.0.0.1:60166.service - OpenSSH per-connection server daemon (10.0.0.1:60166). May 14 17:58:54.364123 systemd-logind[1488]: Removed session 23. May 14 17:58:54.419253 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 60166 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo May 14 17:58:54.420396 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:58:54.424414 systemd-logind[1488]: New session 24 of user core. May 14 17:58:54.438481 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 17:58:55.361203 sshd[4480]: Connection closed by 10.0.0.1 port 60166 May 14 17:58:55.360669 sshd-session[4478]: pam_unix(sshd:session): session closed for user core May 14 17:58:55.371584 systemd[1]: sshd@23-10.0.0.30:22-10.0.0.1:60166.service: Deactivated successfully. May 14 17:58:55.376522 systemd[1]: session-24.scope: Deactivated successfully. May 14 17:58:55.378147 systemd-logind[1488]: Session 24 logged out. Waiting for processes to exit. May 14 17:58:55.382837 systemd[1]: Started sshd@24-10.0.0.30:22-10.0.0.1:60178.service - OpenSSH per-connection server daemon (10.0.0.1:60178). May 14 17:58:55.387660 systemd-logind[1488]: Removed session 24. 
May 14 17:58:55.394528 kubelet[2733]: I0514 17:58:55.394465 2733 topology_manager.go:215] "Topology Admit Handler" podUID="fdd03a27-fded-4051-95be-a6707b59b280" podNamespace="kube-system" podName="cilium-p9q5j" May 14 17:58:55.395220 kubelet[2733]: E0514 17:58:55.394606 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" containerName="apply-sysctl-overwrites" May 14 17:58:55.395220 kubelet[2733]: E0514 17:58:55.394616 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" containerName="mount-bpf-fs" May 14 17:58:55.395220 kubelet[2733]: E0514 17:58:55.394622 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80bdfde4-ef00-47c6-8cd8-76e2d3f36542" containerName="cilium-operator" May 14 17:58:55.395220 kubelet[2733]: E0514 17:58:55.394627 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" containerName="cilium-agent" May 14 17:58:55.395220 kubelet[2733]: E0514 17:58:55.394633 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" containerName="mount-cgroup" May 14 17:58:55.395220 kubelet[2733]: E0514 17:58:55.394638 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" containerName="clean-cilium-state" May 14 17:58:55.395220 kubelet[2733]: I0514 17:58:55.394658 2733 memory_manager.go:354] "RemoveStaleState removing state" podUID="80bdfde4-ef00-47c6-8cd8-76e2d3f36542" containerName="cilium-operator" May 14 17:58:55.395220 kubelet[2733]: I0514 17:58:55.394664 2733 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1fde50-1c25-4687-8a12-5fe284b6ae21" containerName="cilium-agent" May 14 17:58:55.414404 systemd[1]: Created slice kubepods-burstable-podfdd03a27_fded_4051_95be_a6707b59b280.slice - libcontainer container 
kubepods-burstable-podfdd03a27_fded_4051_95be_a6707b59b280.slice.
May 14 17:58:55.435313 kubelet[2733]: I0514 17:58:55.435275 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdd03a27-fded-4051-95be-a6707b59b280-cilium-config-path\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435403 kubelet[2733]: I0514 17:58:55.435322 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fdd03a27-fded-4051-95be-a6707b59b280-cilium-ipsec-secrets\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435403 kubelet[2733]: I0514 17:58:55.435354 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-cilium-cgroup\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435403 kubelet[2733]: I0514 17:58:55.435374 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-cni-path\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435403 kubelet[2733]: I0514 17:58:55.435391 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-etc-cni-netd\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435510 kubelet[2733]: I0514 17:58:55.435423 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-xtables-lock\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435510 kubelet[2733]: I0514 17:58:55.435457 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-host-proc-sys-kernel\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435510 kubelet[2733]: I0514 17:58:55.435486 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdd03a27-fded-4051-95be-a6707b59b280-hubble-tls\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435569 kubelet[2733]: I0514 17:58:55.435515 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-bpf-maps\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435569 kubelet[2733]: I0514 17:58:55.435535 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-cilium-run\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435569 kubelet[2733]: I0514 17:58:55.435554 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-lib-modules\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435628 kubelet[2733]: I0514 17:58:55.435580 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv27h\" (UniqueName: \"kubernetes.io/projected/fdd03a27-fded-4051-95be-a6707b59b280-kube-api-access-gv27h\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435628 kubelet[2733]: I0514 17:58:55.435601 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdd03a27-fded-4051-95be-a6707b59b280-clustermesh-secrets\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435628 kubelet[2733]: I0514 17:58:55.435616 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-host-proc-sys-net\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.435697 kubelet[2733]: I0514 17:58:55.435631 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdd03a27-fded-4051-95be-a6707b59b280-hostproc\") pod \"cilium-p9q5j\" (UID: \"fdd03a27-fded-4051-95be-a6707b59b280\") " pod="kube-system/cilium-p9q5j"
May 14 17:58:55.452927 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 60178 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:58:55.454128 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:58:55.459062 systemd-logind[1488]: New session 25 of user core.
May 14 17:58:55.468329 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 17:58:55.517304 sshd[4494]: Connection closed by 10.0.0.1 port 60178
May 14 17:58:55.517777 sshd-session[4492]: pam_unix(sshd:session): session closed for user core
May 14 17:58:55.531291 systemd[1]: sshd@24-10.0.0.30:22-10.0.0.1:60178.service: Deactivated successfully.
May 14 17:58:55.533064 systemd[1]: session-25.scope: Deactivated successfully.
May 14 17:58:55.535513 systemd-logind[1488]: Session 25 logged out. Waiting for processes to exit.
May 14 17:58:55.549074 systemd[1]: Started sshd@25-10.0.0.30:22-10.0.0.1:60188.service - OpenSSH per-connection server daemon (10.0.0.1:60188).
May 14 17:58:55.553032 systemd-logind[1488]: Removed session 25.
May 14 17:58:55.604633 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 60188 ssh2: RSA SHA256:8RMyfFXHl5/x7yT6EG1cRfaT3SGetct0J8+4HeNKBvo
May 14 17:58:55.605772 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 17:58:55.610524 systemd-logind[1488]: New session 26 of user core.
May 14 17:58:55.616398 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 17:58:55.719474 containerd[1502]: time="2025-05-14T17:58:55.719283464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9q5j,Uid:fdd03a27-fded-4051-95be-a6707b59b280,Namespace:kube-system,Attempt:0,}"
May 14 17:58:55.729633 kubelet[2733]: E0514 17:58:55.729469 2733 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 17:58:55.744088 containerd[1502]: time="2025-05-14T17:58:55.744023953Z" level=info msg="connecting to shim f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a" address="unix:///run/containerd/s/563d437c95ac88499b0e0cce65c5bb79033f7493925ba19284d9c64cd3325655" namespace=k8s.io protocol=ttrpc version=3
May 14 17:58:55.771388 systemd[1]: Started cri-containerd-f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a.scope - libcontainer container f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a.
May 14 17:58:55.795018 containerd[1502]: time="2025-05-14T17:58:55.794970767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9q5j,Uid:fdd03a27-fded-4051-95be-a6707b59b280,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\""
May 14 17:58:55.799080 containerd[1502]: time="2025-05-14T17:58:55.799022835Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 17:58:55.804810 containerd[1502]: time="2025-05-14T17:58:55.804761579Z" level=info msg="Container b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7: CDI devices from CRI Config.CDIDevices: []"
May 14 17:58:55.810910 containerd[1502]: time="2025-05-14T17:58:55.810859921Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7\""
May 14 17:58:55.811622 containerd[1502]: time="2025-05-14T17:58:55.811586799Z" level=info msg="StartContainer for \"b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7\""
May 14 17:58:55.812465 containerd[1502]: time="2025-05-14T17:58:55.812427397Z" level=info msg="connecting to shim b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7" address="unix:///run/containerd/s/563d437c95ac88499b0e0cce65c5bb79033f7493925ba19284d9c64cd3325655" protocol=ttrpc version=3
May 14 17:58:55.835364 systemd[1]: Started cri-containerd-b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7.scope - libcontainer container b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7.
May 14 17:58:55.869123 containerd[1502]: time="2025-05-14T17:58:55.869023674Z" level=info msg="StartContainer for \"b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7\" returns successfully"
May 14 17:58:55.904317 systemd[1]: cri-containerd-b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7.scope: Deactivated successfully.
May 14 17:58:55.905506 containerd[1502]: time="2025-05-14T17:58:55.905468489Z" level=info msg="received exit event container_id:\"b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7\" id:\"b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7\" pid:4570 exited_at:{seconds:1747245535 nanos:905138370}"
May 14 17:58:55.905564 containerd[1502]: time="2025-05-14T17:58:55.905543409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7\" id:\"b344930c318187a9caf47c364376e734b3a751e46fc599556dc106b2ffeba7d7\" pid:4570 exited_at:{seconds:1747245535 nanos:905138370}"
May 14 17:58:56.880290 containerd[1502]: time="2025-05-14T17:58:56.880221137Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 17:58:56.890358 containerd[1502]: time="2025-05-14T17:58:56.889697395Z" level=info msg="Container 8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e: CDI devices from CRI Config.CDIDevices: []"
May 14 17:58:56.897684 containerd[1502]: time="2025-05-14T17:58:56.897551817Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e\""
May 14 17:58:56.900063 containerd[1502]: time="2025-05-14T17:58:56.899243373Z" level=info msg="StartContainer for \"8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e\""
May 14 17:58:56.900063 containerd[1502]: time="2025-05-14T17:58:56.899997131Z" level=info msg="connecting to shim 8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e" address="unix:///run/containerd/s/563d437c95ac88499b0e0cce65c5bb79033f7493925ba19284d9c64cd3325655" protocol=ttrpc version=3
May 14 17:58:56.921303 systemd[1]: Started cri-containerd-8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e.scope - libcontainer container 8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e.
May 14 17:58:56.947358 containerd[1502]: time="2025-05-14T17:58:56.947315342Z" level=info msg="StartContainer for \"8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e\" returns successfully"
May 14 17:58:56.961809 systemd[1]: cri-containerd-8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e.scope: Deactivated successfully.
May 14 17:58:56.964373 containerd[1502]: time="2025-05-14T17:58:56.964299622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e\" id:\"8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e\" pid:4617 exited_at:{seconds:1747245536 nanos:962891786}"
May 14 17:58:56.964373 containerd[1502]: time="2025-05-14T17:58:56.964319142Z" level=info msg="received exit event container_id:\"8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e\" id:\"8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e\" pid:4617 exited_at:{seconds:1747245536 nanos:962891786}"
May 14 17:58:56.982581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b4fdefab80299d05b6cdf90e7b1363ee5633149f01648ee064fdb1c87f4df2e-rootfs.mount: Deactivated successfully.
May 14 17:58:57.497611 kubelet[2733]: I0514 17:58:57.497560 2733 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T17:58:57Z","lastTransitionTime":"2025-05-14T17:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 17:58:57.883907 containerd[1502]: time="2025-05-14T17:58:57.883792009Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 17:58:57.895989 containerd[1502]: time="2025-05-14T17:58:57.895956627Z" level=info msg="Container 0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c: CDI devices from CRI Config.CDIDevices: []"
May 14 17:58:57.902769 containerd[1502]: time="2025-05-14T17:58:57.902726175Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c\""
May 14 17:58:57.903494 containerd[1502]: time="2025-05-14T17:58:57.903456494Z" level=info msg="StartContainer for \"0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c\""
May 14 17:58:57.904854 containerd[1502]: time="2025-05-14T17:58:57.904825692Z" level=info msg="connecting to shim 0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c" address="unix:///run/containerd/s/563d437c95ac88499b0e0cce65c5bb79033f7493925ba19284d9c64cd3325655" protocol=ttrpc version=3
May 14 17:58:57.930335 systemd[1]: Started cri-containerd-0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c.scope - libcontainer container 0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c.
May 14 17:58:57.976006 containerd[1502]: time="2025-05-14T17:58:57.975968885Z" level=info msg="StartContainer for \"0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c\" returns successfully"
May 14 17:58:57.976620 systemd[1]: cri-containerd-0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c.scope: Deactivated successfully.
May 14 17:58:57.978059 containerd[1502]: time="2025-05-14T17:58:57.977928482Z" level=info msg="received exit event container_id:\"0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c\" id:\"0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c\" pid:4661 exited_at:{seconds:1747245537 nanos:977692522}"
May 14 17:58:57.978740 containerd[1502]: time="2025-05-14T17:58:57.978707640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c\" id:\"0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c\" pid:4661 exited_at:{seconds:1747245537 nanos:977692522}"
May 14 17:58:57.997887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cd0aa112344a05169885900c174dc6e07df2e22bd66e8c400205b14ee24f35c-rootfs.mount: Deactivated successfully.
May 14 17:58:58.889854 containerd[1502]: time="2025-05-14T17:58:58.889815448Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 17:58:58.903896 containerd[1502]: time="2025-05-14T17:58:58.903259711Z" level=info msg="Container 8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130: CDI devices from CRI Config.CDIDevices: []"
May 14 17:58:58.910001 containerd[1502]: time="2025-05-14T17:58:58.909892342Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130\""
May 14 17:58:58.911631 containerd[1502]: time="2025-05-14T17:58:58.911603060Z" level=info msg="StartContainer for \"8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130\""
May 14 17:58:58.912830 containerd[1502]: time="2025-05-14T17:58:58.912802259Z" level=info msg="connecting to shim 8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130" address="unix:///run/containerd/s/563d437c95ac88499b0e0cce65c5bb79033f7493925ba19284d9c64cd3325655" protocol=ttrpc version=3
May 14 17:58:58.937397 systemd[1]: Started cri-containerd-8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130.scope - libcontainer container 8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130.
May 14 17:58:58.969784 systemd[1]: cri-containerd-8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130.scope: Deactivated successfully.
May 14 17:58:58.970873 containerd[1502]: time="2025-05-14T17:58:58.970767266Z" level=info msg="received exit event container_id:\"8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130\" id:\"8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130\" pid:4699 exited_at:{seconds:1747245538 nanos:970558746}"
May 14 17:58:58.970873 containerd[1502]: time="2025-05-14T17:58:58.970863826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130\" id:\"8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130\" pid:4699 exited_at:{seconds:1747245538 nanos:970558746}"
May 14 17:58:58.979297 containerd[1502]: time="2025-05-14T17:58:58.979263455Z" level=info msg="StartContainer for \"8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130\" returns successfully"
May 14 17:58:58.990544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8de5e357dbdb653b1da8a69b1dad4058d65a03ef9ca2cb3a72124c3a01986130-rootfs.mount: Deactivated successfully.
May 14 17:58:59.895053 containerd[1502]: time="2025-05-14T17:58:59.895012242Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 17:58:59.905427 containerd[1502]: time="2025-05-14T17:58:59.904709115Z" level=info msg="Container 9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe: CDI devices from CRI Config.CDIDevices: []"
May 14 17:58:59.908405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3614340082.mount: Deactivated successfully.
May 14 17:58:59.914069 containerd[1502]: time="2025-05-14T17:58:59.914019508Z" level=info msg="CreateContainer within sandbox \"f1a10fd248e12b9365745d141b386ea76e22b80c156f954909f573806972b74a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\""
May 14 17:58:59.914671 containerd[1502]: time="2025-05-14T17:58:59.914647907Z" level=info msg="StartContainer for \"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\""
May 14 17:58:59.917521 containerd[1502]: time="2025-05-14T17:58:59.917487985Z" level=info msg="connecting to shim 9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe" address="unix:///run/containerd/s/563d437c95ac88499b0e0cce65c5bb79033f7493925ba19284d9c64cd3325655" protocol=ttrpc version=3
May 14 17:58:59.949370 systemd[1]: Started cri-containerd-9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe.scope - libcontainer container 9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe.
May 14 17:58:59.990509 containerd[1502]: time="2025-05-14T17:58:59.990383971Z" level=info msg="StartContainer for \"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\" returns successfully"
May 14 17:59:00.048022 containerd[1502]: time="2025-05-14T17:59:00.047963111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\" id:\"8312c0d8b1831e5b2dfbdc2915e94ea11beee4602f6917a58544138626df7d67\" pid:4766 exited_at:{seconds:1747245540 nanos:47633151}"
May 14 17:59:00.252191 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 17:59:00.916711 kubelet[2733]: I0514 17:59:00.916639 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p9q5j" podStartSLOduration=5.916619811 podStartE2EDuration="5.916619811s" podCreationTimestamp="2025-05-14 17:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:59:00.916411011 +0000 UTC m=+85.354993261" watchObservedRunningTime="2025-05-14 17:59:00.916619811 +0000 UTC m=+85.355202061"
May 14 17:59:01.992966 containerd[1502]: time="2025-05-14T17:59:01.992886891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\" id:\"c43eb93b28cc54938eef3cc50b6486b53f3172bae734baf3d3b78132a18b3e31\" pid:4927 exit_status:1 exited_at:{seconds:1747245541 nanos:992313291}"
May 14 17:59:03.238190 systemd-networkd[1439]: lxc_health: Link UP
May 14 17:59:03.238576 systemd-networkd[1439]: lxc_health: Gained carrier
May 14 17:59:04.142511 containerd[1502]: time="2025-05-14T17:59:04.142466649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\" id:\"531da8da017b4ed775aa73fe924450234a778a93980e3e6b710a51ffdc195e9f\" pid:5297 exited_at:{seconds:1747245544 nanos:142139568}"
May 14 17:59:04.727324 systemd-networkd[1439]: lxc_health: Gained IPv6LL
May 14 17:59:06.269657 containerd[1502]: time="2025-05-14T17:59:06.269610056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\" id:\"56411d9897dab6f312fd641b2791cebd2b14a3c8694b73175ebbddbb8f5edda0\" pid:5332 exited_at:{seconds:1747245546 nanos:269057375}"
May 14 17:59:08.417061 containerd[1502]: time="2025-05-14T17:59:08.417019584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cee2841a80c11a26156e447b91eee447f57946b5834b5d6857362184b0935fe\" id:\"6fa8d3801a845f1f049af212c1f8027b505e6db0f268f2beac48dac56829d3ec\" pid:5364 exited_at:{seconds:1747245548 nanos:416745983}"
May 14 17:59:08.421554 sshd[4507]: Connection closed by 10.0.0.1 port 60188
May 14 17:59:08.422473 sshd-session[4505]: pam_unix(sshd:session): session closed for user core
May 14 17:59:08.426895 systemd[1]: sshd@25-10.0.0.30:22-10.0.0.1:60188.service: Deactivated successfully.
May 14 17:59:08.428652 systemd[1]: session-26.scope: Deactivated successfully.
May 14 17:59:08.429325 systemd-logind[1488]: Session 26 logged out. Waiting for processes to exit.
May 14 17:59:08.430632 systemd-logind[1488]: Removed session 26.