Sep 10 23:41:23.804013 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:41:23.804037 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 10 23:41:23.804047 kernel: KASLR enabled
Sep 10 23:41:23.804053 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:41:23.804058 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 10 23:41:23.804064 kernel: random: crng init done
Sep 10 23:41:23.804070 kernel: secureboot: Secure boot disabled
Sep 10 23:41:23.804076 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:41:23.804081 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 10 23:41:23.804088 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:41:23.804094 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804100 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804105 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804111 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804118 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804125 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804131 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804137 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804143 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:41:23.804149 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:41:23.804155 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:41:23.804161 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:41:23.804167 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 10 23:41:23.804173 kernel: Zone ranges:
Sep 10 23:41:23.804179 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:41:23.804186 kernel: DMA32 empty
Sep 10 23:41:23.804191 kernel: Normal empty
Sep 10 23:41:23.804197 kernel: Device empty
Sep 10 23:41:23.804203 kernel: Movable zone start for each node
Sep 10 23:41:23.804209 kernel: Early memory node ranges
Sep 10 23:41:23.804214 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 10 23:41:23.804220 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 10 23:41:23.804226 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 10 23:41:23.804232 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 10 23:41:23.804238 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 10 23:41:23.804244 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 10 23:41:23.804249 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 10 23:41:23.804257 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 10 23:41:23.804262 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 10 23:41:23.804268 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 23:41:23.804277 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 23:41:23.804283 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 23:41:23.804290 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:41:23.804297 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:41:23.804304 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:41:23.804310 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 10 23:41:23.804316 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:41:23.804322 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:41:23.804329 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:41:23.804335 kernel: psci: Trusted OS migration not required
Sep 10 23:41:23.804341 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:41:23.804347 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:41:23.804353 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:41:23.804361 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:41:23.804368 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:41:23.804374 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:41:23.804380 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:41:23.804387 kernel: CPU features: detected: Spectre-v4
Sep 10 23:41:23.804393 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:41:23.804399 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:41:23.804405 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:41:23.804412 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:41:23.804418 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:41:23.804424 kernel: alternatives: applying boot alternatives
Sep 10 23:41:23.804431 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:41:23.804439 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:41:23.804445 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:41:23.804452 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:41:23.804458 kernel: Fallback order for Node 0: 0
Sep 10 23:41:23.804464 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:41:23.804471 kernel: Policy zone: DMA
Sep 10 23:41:23.804477 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:41:23.804483 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:41:23.804489 kernel: software IO TLB: area num 4.
Sep 10 23:41:23.804496 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:41:23.804502 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 10 23:41:23.804510 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:41:23.804516 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:41:23.804523 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:41:23.804530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:41:23.804536 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:41:23.804543 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:41:23.804549 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:41:23.804555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:41:23.804610 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:41:23.804617 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:41:23.804623 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:41:23.804632 kernel: GICv3: 256 SPIs implemented
Sep 10 23:41:23.804639 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:41:23.804645 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:41:23.804651 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:41:23.804657 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:41:23.804664 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:41:23.804670 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:41:23.804676 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:41:23.804683 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:41:23.804689 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:41:23.804696 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:41:23.804702 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:41:23.804709 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:41:23.804716 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:41:23.804722 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:41:23.804729 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:41:23.804735 kernel: arm-pv: using stolen time PV
Sep 10 23:41:23.804742 kernel: Console: colour dummy device 80x25
Sep 10 23:41:23.804748 kernel: ACPI: Core revision 20240827
Sep 10 23:41:23.804755 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:41:23.804762 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:41:23.804768 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:41:23.804776 kernel: landlock: Up and running.
Sep 10 23:41:23.804782 kernel: SELinux: Initializing.
Sep 10 23:41:23.804798 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:41:23.804807 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:41:23.804813 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:41:23.804820 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:41:23.804827 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:41:23.804833 kernel: Remapping and enabling EFI services.
Sep 10 23:41:23.804840 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:41:23.804852 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:41:23.804859 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:41:23.804866 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:41:23.804874 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:41:23.804881 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:41:23.804887 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:41:23.804895 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:41:23.804902 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:41:23.804910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:41:23.804916 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:41:23.804923 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:41:23.804930 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:41:23.804937 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:41:23.804944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:41:23.804951 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:41:23.804958 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:41:23.804965 kernel: SMP: Total of 4 processors activated.
Sep 10 23:41:23.805010 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:41:23.805018 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:41:23.805025 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:41:23.805032 kernel: CPU features: detected: Common not Private translations
Sep 10 23:41:23.805039 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:41:23.805046 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:41:23.805053 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:41:23.805065 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:41:23.805072 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:41:23.805082 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:41:23.805093 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:41:23.805102 kernel: alternatives: applying system-wide alternatives
Sep 10 23:41:23.805111 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:41:23.805118 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 10 23:41:23.805126 kernel: devtmpfs: initialized
Sep 10 23:41:23.805133 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:41:23.805140 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:41:23.805147 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:41:23.805155 kernel: 0 pages in range for non-PLT usage
Sep 10 23:41:23.805162 kernel: 508560 pages in range for PLT usage
Sep 10 23:41:23.805168 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:41:23.805175 kernel: SMBIOS 3.0.0 present.
Sep 10 23:41:23.805183 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:41:23.805189 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:41:23.805196 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:41:23.805203 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:41:23.805210 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:41:23.805218 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:41:23.805225 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:41:23.805232 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 10 23:41:23.805239 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:41:23.805246 kernel: cpuidle: using governor menu
Sep 10 23:41:23.805253 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:41:23.805259 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:41:23.805266 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:41:23.805273 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:41:23.805281 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:41:23.805288 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:41:23.805295 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:41:23.805302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:41:23.805308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:41:23.805315 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:41:23.805322 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:41:23.805329 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:41:23.805336 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:41:23.805344 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:41:23.805351 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:41:23.805357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:41:23.805364 kernel: ACPI: Interpreter enabled
Sep 10 23:41:23.805371 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:41:23.805378 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:41:23.805385 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:41:23.805392 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:41:23.805399 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:41:23.805406 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:41:23.805415 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:41:23.805422 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:41:23.805429 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:41:23.805595 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:41:23.805682 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:41:23.805746 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:41:23.805824 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:41:23.805894 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:41:23.805904 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:41:23.805912 kernel: PCI host bridge to bus 0000:00
Sep 10 23:41:23.806063 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:41:23.806129 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:41:23.806185 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:41:23.806242 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:41:23.806330 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:41:23.806406 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:41:23.806471 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:41:23.806534 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:41:23.806620 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:41:23.806688 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:41:23.806752 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:41:23.806851 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:41:23.806910 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:41:23.806964 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:41:23.807071 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:41:23.807081 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:41:23.807089 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:41:23.807096 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:41:23.807107 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:41:23.807114 kernel: iommu: Default domain type: Translated
Sep 10 23:41:23.807121 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:41:23.807128 kernel: efivars: Registered efivars operations
Sep 10 23:41:23.807136 kernel: vgaarb: loaded
Sep 10 23:41:23.807143 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:41:23.807150 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:41:23.807158 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:41:23.807165 kernel: pnp: PnP ACPI init
Sep 10 23:41:23.807238 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:41:23.807248 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:41:23.807255 kernel: NET: Registered PF_INET protocol family
Sep 10 23:41:23.807262 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:41:23.807269 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:41:23.807277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:41:23.807284 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:41:23.807291 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:41:23.807300 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:41:23.807307 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:41:23.807314 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:41:23.807321 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:41:23.807328 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:41:23.807335 kernel: kvm [1]: HYP mode not available
Sep 10 23:41:23.807342 kernel: Initialise system trusted keyrings
Sep 10 23:41:23.807349 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:41:23.807356 kernel: Key type asymmetric registered
Sep 10 23:41:23.807364 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:41:23.807371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:41:23.807378 kernel: io scheduler mq-deadline registered
Sep 10 23:41:23.807385 kernel: io scheduler kyber registered
Sep 10 23:41:23.807392 kernel: io scheduler bfq registered
Sep 10 23:41:23.807399 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:41:23.807406 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:41:23.807413 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:41:23.807474 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:41:23.807485 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:41:23.807491 kernel: thunder_xcv, ver 1.0
Sep 10 23:41:23.807498 kernel: thunder_bgx, ver 1.0
Sep 10 23:41:23.807505 kernel: nicpf, ver 1.0
Sep 10 23:41:23.807512 kernel: nicvf, ver 1.0
Sep 10 23:41:23.807604 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:41:23.807664 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:41:23 UTC (1757547683)
Sep 10 23:41:23.807673 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:41:23.807683 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:41:23.807690 kernel: watchdog: NMI not fully supported
Sep 10 23:41:23.807697 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:41:23.807704 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:41:23.807710 kernel: Segment Routing with IPv6
Sep 10 23:41:23.807717 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:41:23.807724 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:41:23.807731 kernel: Key type dns_resolver registered
Sep 10 23:41:23.807737 kernel: registered taskstats version 1
Sep 10 23:41:23.807744 kernel: Loading compiled-in X.509 certificates
Sep 10 23:41:23.807752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7'
Sep 10 23:41:23.807759 kernel: Demotion targets for Node 0: null
Sep 10 23:41:23.807766 kernel: Key type .fscrypt registered
Sep 10 23:41:23.807773 kernel: Key type fscrypt-provisioning registered
Sep 10 23:41:23.807780 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:41:23.807787 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:41:23.807807 kernel: ima: No architecture policies found
Sep 10 23:41:23.807814 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:41:23.807823 kernel: clk: Disabling unused clocks
Sep 10 23:41:23.807830 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:41:23.807837 kernel: Warning: unable to open an initial console.
Sep 10 23:41:23.807844 kernel: Freeing unused kernel memory: 38976K
Sep 10 23:41:23.807851 kernel: Run /init as init process
Sep 10 23:41:23.807858 kernel: with arguments:
Sep 10 23:41:23.807865 kernel: /init
Sep 10 23:41:23.807871 kernel: with environment:
Sep 10 23:41:23.807878 kernel: HOME=/
Sep 10 23:41:23.807886 kernel: TERM=linux
Sep 10 23:41:23.807893 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:41:23.807901 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:41:23.807911 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:41:23.807919 systemd[1]: Detected virtualization kvm.
Sep 10 23:41:23.807926 systemd[1]: Detected architecture arm64.
Sep 10 23:41:23.807933 systemd[1]: Running in initrd.
Sep 10 23:41:23.807940 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:41:23.807949 systemd[1]: Hostname set to .
Sep 10 23:41:23.807956 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:41:23.807964 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:41:23.808010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:41:23.808019 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:41:23.808027 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:41:23.808035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:41:23.808042 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:41:23.808053 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:41:23.808061 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:41:23.808069 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:41:23.808076 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:41:23.808084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:41:23.808091 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:41:23.808099 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:41:23.808107 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:41:23.808114 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:41:23.808122 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:41:23.808129 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:41:23.808136 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:41:23.808144 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:41:23.808151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:41:23.808158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:41:23.808167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:41:23.808175 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:41:23.808182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:41:23.808189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:41:23.808197 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:41:23.808205 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:41:23.808213 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:41:23.808220 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:41:23.808229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:41:23.808237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:41:23.808244 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:41:23.808252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:41:23.808260 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:41:23.808269 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:41:23.808276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:41:23.808284 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:41:23.808292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:41:23.808323 systemd-journald[243]: Collecting audit messages is disabled.
Sep 10 23:41:23.808344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:41:23.808351 kernel: Bridge firewalling registered
Sep 10 23:41:23.808359 systemd-journald[243]: Journal started
Sep 10 23:41:23.808377 systemd-journald[243]: Runtime Journal (/run/log/journal/f70f52430c6e4e0fb8acec8c7a76f562) is 6M, max 48.5M, 42.4M free.
Sep 10 23:41:23.787356 systemd-modules-load[245]: Inserted module 'overlay'
Sep 10 23:41:23.805612 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 10 23:41:23.813622 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:41:23.813640 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:41:23.815029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:41:23.818865 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:41:23.822798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:41:23.824163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:41:23.837757 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:41:23.840780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:41:23.843740 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:41:23.846420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:41:23.848419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:41:23.852277 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:41:23.858843 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:41:23.888009 systemd-resolved[294]: Positive Trust Anchors:
Sep 10 23:41:23.888024 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:41:23.888056 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:41:23.892887 systemd-resolved[294]: Defaulting to hostname 'linux'.
Sep 10 23:41:23.896812 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:41:23.897694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:41:23.940652 kernel: SCSI subsystem initialized
Sep 10 23:41:23.945583 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:41:23.953593 kernel: iscsi: registered transport (tcp)
Sep 10 23:41:23.965680 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:41:23.965747 kernel: QLogic iSCSI HBA Driver
Sep 10 23:41:23.983280 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:41:24.006210 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:41:24.008240 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:41:24.052865 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:41:24.055027 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:41:24.113593 kernel: raid6: neonx8 gen() 15730 MB/s Sep 10 23:41:24.130581 kernel: raid6: neonx4 gen() 15789 MB/s Sep 10 23:41:24.147591 kernel: raid6: neonx2 gen() 13272 MB/s Sep 10 23:41:24.164589 kernel: raid6: neonx1 gen() 10354 MB/s Sep 10 23:41:24.181581 kernel: raid6: int64x8 gen() 6890 MB/s Sep 10 23:41:24.198582 kernel: raid6: int64x4 gen() 7344 MB/s Sep 10 23:41:24.215592 kernel: raid6: int64x2 gen() 6096 MB/s Sep 10 23:41:24.232581 kernel: raid6: int64x1 gen() 5036 MB/s Sep 10 23:41:24.232619 kernel: raid6: using algorithm neonx4 gen() 15789 MB/s Sep 10 23:41:24.249617 kernel: raid6: .... xor() 12333 MB/s, rmw enabled Sep 10 23:41:24.249660 kernel: raid6: using neon recovery algorithm Sep 10 23:41:24.254867 kernel: xor: measuring software checksum speed Sep 10 23:41:24.254907 kernel: 8regs : 21596 MB/sec Sep 10 23:41:24.256015 kernel: 32regs : 21676 MB/sec Sep 10 23:41:24.256029 kernel: arm64_neon : 28128 MB/sec Sep 10 23:41:24.256039 kernel: xor: using function: arm64_neon (28128 MB/sec) Sep 10 23:41:24.308607 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 23:41:24.315607 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:41:24.317940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:41:24.344549 systemd-udevd[499]: Using default interface naming scheme 'v255'. Sep 10 23:41:24.348798 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:41:24.350554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 23:41:24.371889 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Sep 10 23:41:24.395441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:41:24.399687 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 23:41:24.448858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 10 23:41:24.451528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 10 23:41:24.498582 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 10 23:41:24.500590 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 23:41:24.505584 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 23:41:24.505620 kernel: GPT:9289727 != 19775487 Sep 10 23:41:24.514603 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 23:41:24.515944 kernel: GPT:9289727 != 19775487 Sep 10 23:41:24.515972 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 23:41:24.516868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 23:41:24.521959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:41:24.522080 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:41:24.524993 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:41:24.528059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:41:24.546442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 10 23:41:24.553450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:41:24.561194 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 10 23:41:24.562472 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 23:41:24.577150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 23:41:24.583206 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 10 23:41:24.584166 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Sep 10 23:41:24.586527 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:41:24.588325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:41:24.589957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 23:41:24.592257 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 23:41:24.593937 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 23:41:24.618038 disk-uuid[595]: Primary Header is updated. Sep 10 23:41:24.618038 disk-uuid[595]: Secondary Entries is updated. Sep 10 23:41:24.618038 disk-uuid[595]: Secondary Header is updated. Sep 10 23:41:24.621917 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:41:24.624626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 23:41:25.631181 disk-uuid[601]: The operation has completed successfully. Sep 10 23:41:25.632269 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 23:41:25.658554 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 23:41:25.658669 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 23:41:25.684540 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 23:41:25.708701 sh[614]: Success Sep 10 23:41:25.720688 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 23:41:25.720731 kernel: device-mapper: uevent: version 1.0.3 Sep 10 23:41:25.721665 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 23:41:25.729574 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 10 23:41:25.753622 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 23:41:25.756443 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 10 23:41:25.767944 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 23:41:25.772607 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (626) Sep 10 23:41:25.772653 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade Sep 10 23:41:25.774244 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:41:25.777699 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 23:41:25.777736 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 23:41:25.778633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 23:41:25.779696 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:41:25.780702 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 10 23:41:25.781471 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 23:41:25.784062 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 23:41:25.804580 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657) Sep 10 23:41:25.804887 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:41:25.806581 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:41:25.808832 kernel: BTRFS info (device vda6): turning on async discard Sep 10 23:41:25.808871 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 23:41:25.813582 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:41:25.814169 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 10 23:41:25.816118 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 23:41:25.892884 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:41:25.895515 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:41:25.930903 ignition[701]: Ignition 2.21.0 Sep 10 23:41:25.930917 ignition[701]: Stage: fetch-offline Sep 10 23:41:25.930951 ignition[701]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:41:25.930960 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:41:25.931138 ignition[701]: parsed url from cmdline: "" Sep 10 23:41:25.933390 systemd-networkd[807]: lo: Link UP Sep 10 23:41:25.931141 ignition[701]: no config URL provided Sep 10 23:41:25.933394 systemd-networkd[807]: lo: Gained carrier Sep 10 23:41:25.931146 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 23:41:25.934270 systemd-networkd[807]: Enumeration completed Sep 10 23:41:25.931152 ignition[701]: no config at "/usr/lib/ignition/user.ign" Sep 10 23:41:25.934429 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:41:25.931172 ignition[701]: op(1): [started] loading QEMU firmware config module Sep 10 23:41:25.934792 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:41:25.931176 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 23:41:25.934796 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:41:25.935460 systemd-networkd[807]: eth0: Link UP Sep 10 23:41:25.935638 systemd-networkd[807]: eth0: Gained carrier Sep 10 23:41:25.935647 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 10 23:41:25.936745 systemd[1]: Reached target network.target - Network. Sep 10 23:41:25.947914 ignition[701]: op(1): [finished] loading QEMU firmware config module Sep 10 23:41:25.947941 ignition[701]: QEMU firmware config was not found. Ignoring... Sep 10 23:41:25.958644 systemd-networkd[807]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:41:26.004488 ignition[701]: parsing config with SHA512: d2c940d327c4b91cfa75c36d2329c3bc9bf32d945236e494cb127851b10fc28966039000de9a6835e9da163df886798891698ad58b8ebd78e767a5344c7ab8bd Sep 10 23:41:26.008917 unknown[701]: fetched base config from "system" Sep 10 23:41:26.008930 unknown[701]: fetched user config from "qemu" Sep 10 23:41:26.009334 ignition[701]: fetch-offline: fetch-offline passed Sep 10 23:41:26.009874 systemd-resolved[294]: Detected conflict on linux IN A 10.0.0.21 Sep 10 23:41:26.009390 ignition[701]: Ignition finished successfully Sep 10 23:41:26.009882 systemd-resolved[294]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Sep 10 23:41:26.011863 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:41:26.013274 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 23:41:26.014093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 10 23:41:26.055327 ignition[815]: Ignition 2.21.0 Sep 10 23:41:26.055345 ignition[815]: Stage: kargs Sep 10 23:41:26.055483 ignition[815]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:41:26.055492 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:41:26.057975 ignition[815]: kargs: kargs passed Sep 10 23:41:26.058048 ignition[815]: Ignition finished successfully Sep 10 23:41:26.061142 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Sep 10 23:41:26.062978 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 10 23:41:26.092029 ignition[823]: Ignition 2.21.0 Sep 10 23:41:26.092048 ignition[823]: Stage: disks Sep 10 23:41:26.092185 ignition[823]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:41:26.092194 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:41:26.094153 ignition[823]: disks: disks passed Sep 10 23:41:26.094224 ignition[823]: Ignition finished successfully Sep 10 23:41:26.095992 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 23:41:26.097464 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 23:41:26.098728 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 23:41:26.100324 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:41:26.102035 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:41:26.103356 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:41:26.105734 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 23:41:26.133747 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 10 23:41:26.138739 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 23:41:26.141806 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 23:41:26.213586 kernel: EXT4-fs (vda9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none. Sep 10 23:41:26.214408 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 23:41:26.215656 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 23:41:26.218473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:41:26.220753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 10 23:41:26.221593 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 10 23:41:26.221640 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 23:41:26.221667 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:41:26.233613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 23:41:26.235741 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 23:41:26.238576 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (842) Sep 10 23:41:26.238603 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:41:26.239586 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:41:26.242668 kernel: BTRFS info (device vda6): turning on async discard Sep 10 23:41:26.242699 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 23:41:26.244482 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 23:41:26.281734 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 23:41:26.285201 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory Sep 10 23:41:26.289595 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 23:41:26.292950 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 23:41:26.365052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 23:41:26.367677 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 23:41:26.369184 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 10 23:41:26.394585 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:41:26.408032 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 23:41:26.414283 ignition[956]: INFO : Ignition 2.21.0 Sep 10 23:41:26.414283 ignition[956]: INFO : Stage: mount Sep 10 23:41:26.416416 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:41:26.416416 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:41:26.418127 ignition[956]: INFO : mount: mount passed Sep 10 23:41:26.418127 ignition[956]: INFO : Ignition finished successfully Sep 10 23:41:26.418933 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 23:41:26.421048 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 23:41:26.772381 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 23:41:26.776130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:41:26.804589 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969) Sep 10 23:41:26.806692 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:41:26.806721 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:41:26.809608 kernel: BTRFS info (device vda6): turning on async discard Sep 10 23:41:26.809639 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 23:41:26.810766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 23:41:26.843036 ignition[986]: INFO : Ignition 2.21.0 Sep 10 23:41:26.843036 ignition[986]: INFO : Stage: files Sep 10 23:41:26.844453 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:41:26.844453 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:41:26.844453 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Sep 10 23:41:26.847299 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 23:41:26.847299 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 23:41:26.847299 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 23:41:26.847299 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 23:41:26.847299 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 23:41:26.847205 unknown[986]: wrote ssh authorized keys file for user: core Sep 10 23:41:26.854379 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 10 23:41:26.854379 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 10 23:41:26.902346 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 23:41:27.232170 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 10 23:41:27.232170 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 23:41:27.235387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 10 23:41:27.446142 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 10 23:41:27.558610 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 23:41:27.558610 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 10 23:41:27.561747 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 23:41:27.561747 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:41:27.561747 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:41:27.561747 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:41:27.561747 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:41:27.561747 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:41:27.561747 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:41:27.572354 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:41:27.572354 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:41:27.572354 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:41:27.572354 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:41:27.572354 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:41:27.572354 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 10 23:41:27.567790 systemd-networkd[807]: eth0: Gained IPv6LL Sep 10 23:41:27.886783 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 10 23:41:28.211381 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:41:28.211381 ignition[986]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(e): op(f): [finished] writing 
unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 10 23:41:28.214627 ignition[986]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 23:41:28.228876 ignition[986]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 23:41:28.233458 ignition[986]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 23:41:28.236036 ignition[986]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 23:41:28.236036 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 10 23:41:28.236036 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 23:41:28.236036 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:41:28.236036 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:41:28.236036 ignition[986]: INFO : files: files passed Sep 10 23:41:28.236036 ignition[986]: INFO : Ignition finished successfully Sep 10 23:41:28.237665 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 23:41:28.240866 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 23:41:28.244798 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 23:41:28.255083 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 23:41:28.255198 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 10 23:41:28.257730 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory Sep 10 23:41:28.259247 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:41:28.259247 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:41:28.262035 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:41:28.262628 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:41:28.264606 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 23:41:28.266992 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 23:41:28.327680 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 23:41:28.328653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 23:41:28.329982 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 23:41:28.331476 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 23:41:28.333158 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 23:41:28.334065 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 23:41:28.360842 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:41:28.363223 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 23:41:28.389552 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:41:28.391675 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:41:28.392783 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 10 23:41:28.394193 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 23:41:28.394331 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:41:28.396430 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 23:41:28.398100 systemd[1]: Stopped target basic.target - Basic System. Sep 10 23:41:28.399428 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 23:41:28.400817 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:41:28.402368 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 23:41:28.404010 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:41:28.405662 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 23:41:28.407250 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:41:28.408720 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 23:41:28.410329 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 23:41:28.411843 systemd[1]: Stopped target swap.target - Swaps. Sep 10 23:41:28.413092 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 23:41:28.413232 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:41:28.415336 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:41:28.416941 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:41:28.418586 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 23:41:28.418676 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:41:28.420496 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 23:41:28.420647 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 10 23:41:28.423063 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 23:41:28.423184 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:41:28.424700 systemd[1]: Stopped target paths.target - Path Units. Sep 10 23:41:28.426109 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 23:41:28.429595 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:41:28.430575 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 23:41:28.432456 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 23:41:28.433814 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 23:41:28.433910 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 23:41:28.435191 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 23:41:28.435265 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 23:41:28.436546 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 23:41:28.436694 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:41:28.438185 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 23:41:28.438288 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 23:41:28.440349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 23:41:28.442395 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 23:41:28.443305 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 23:41:28.443439 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:41:28.445221 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 23:41:28.445367 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 10 23:41:28.451888 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 23:41:28.452005 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 23:41:28.462412 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 23:41:28.468980 ignition[1041]: INFO : Ignition 2.21.0 Sep 10 23:41:28.468980 ignition[1041]: INFO : Stage: umount Sep 10 23:41:28.470491 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:41:28.470491 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:41:28.470491 ignition[1041]: INFO : umount: umount passed Sep 10 23:41:28.470491 ignition[1041]: INFO : Ignition finished successfully Sep 10 23:41:28.471519 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 23:41:28.472694 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 23:41:28.474825 systemd[1]: Stopped target network.target - Network. Sep 10 23:41:28.476288 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 23:41:28.476375 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 23:41:28.478989 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 23:41:28.479052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 23:41:28.480348 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 23:41:28.480392 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 23:41:28.481857 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 23:41:28.481899 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 23:41:28.483467 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 23:41:28.484819 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 23:41:28.492500 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 10 23:41:28.492633 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:41:28.495993 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:41:28.496304 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:41:28.496342 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:41:28.499409 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:41:28.499649 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:41:28.499739 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:41:28.503179 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:41:28.504326 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:41:28.507645 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:41:28.507725 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:41:28.510461 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:41:28.512140 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:41:28.512205 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:41:28.513976 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:41:28.514023 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:41:28.516346 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:41:28.516393 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:41:28.521985 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:41:28.525523 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:41:28.536204 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:41:28.541789 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:41:28.543060 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:41:28.543186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:41:28.545114 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:41:28.545232 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:41:28.547118 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:41:28.547190 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:41:28.548738 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:41:28.548784 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:41:28.550337 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:41:28.550400 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:41:28.552959 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:41:28.553024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:41:28.555570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:41:28.555636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:41:28.558305 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:41:28.558363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:41:28.560871 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:41:28.562185 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:41:28.562240 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:41:28.565909 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:41:28.565955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:41:28.568220 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 10 23:41:28.568263 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:41:28.571262 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 23:41:28.571309 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:41:28.573001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:41:28.573049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:41:28.577047 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:41:28.578613 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:41:28.580508 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:41:28.583063 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:41:28.606257 systemd[1]: Switching root.
Sep 10 23:41:28.642056 systemd-journald[243]: Journal stopped
Sep 10 23:41:29.426768 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:41:29.426832 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:41:29.426851 kernel: SELinux: policy capability open_perms=1
Sep 10 23:41:29.426865 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:41:29.426884 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:41:29.426894 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:41:29.426904 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:41:29.426915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:41:29.426925 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:41:29.426934 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:41:29.426944 kernel: audit: type=1403 audit(1757547688.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:41:29.426960 systemd[1]: Successfully loaded SELinux policy in 35.003ms.
Sep 10 23:41:29.426978 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.953ms.
Sep 10 23:41:29.426989 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:41:29.427000 systemd[1]: Detected virtualization kvm.
Sep 10 23:41:29.427012 systemd[1]: Detected architecture arm64.
Sep 10 23:41:29.427022 systemd[1]: Detected first boot.
Sep 10 23:41:29.427031 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:41:29.427041 zram_generator::config[1086]: No configuration found.
Sep 10 23:41:29.427052 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:41:29.427062 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:41:29.427072 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:41:29.427083 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:41:29.427092 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:41:29.427157 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:41:29.427172 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:41:29.427182 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:41:29.427193 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:41:29.427203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:41:29.427213 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:41:29.427223 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:41:29.427233 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:41:29.427246 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:41:29.427255 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:41:29.427266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:41:29.427276 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:41:29.427286 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:41:29.427296 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:41:29.427306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:41:29.427316 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:41:29.427326 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:41:29.427346 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:41:29.427357 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:41:29.427367 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:41:29.427377 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:41:29.427387 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:41:29.427398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:41:29.427408 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:41:29.427418 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:41:29.427429 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:41:29.427439 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:41:29.427449 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:41:29.427458 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:41:29.427469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:41:29.427479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:41:29.427489 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:41:29.427499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:41:29.427509 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:41:29.427521 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:41:29.427541 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:41:29.427551 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:41:29.427587 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:41:29.427598 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:41:29.427609 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:41:29.427619 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:41:29.427630 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:41:29.427641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:41:29.427654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:41:29.427664 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:41:29.427675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:41:29.427684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:41:29.427695 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:41:29.427705 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:41:29.427715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:41:29.427726 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:41:29.427737 kernel: fuse: init (API version 7.41)
Sep 10 23:41:29.427747 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:41:29.427764 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:41:29.427775 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:41:29.427785 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:41:29.427796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:41:29.427830 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:41:29.427840 kernel: ACPI: bus type drm_connector registered
Sep 10 23:41:29.427850 kernel: loop: module loaded
Sep 10 23:41:29.427863 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:41:29.427873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:41:29.427883 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:41:29.427895 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:41:29.427933 systemd-journald[1168]: Collecting audit messages is disabled.
Sep 10 23:41:29.427959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:41:29.427971 systemd-journald[1168]: Journal started
Sep 10 23:41:29.427992 systemd-journald[1168]: Runtime Journal (/run/log/journal/f70f52430c6e4e0fb8acec8c7a76f562) is 6M, max 48.5M, 42.4M free.
Sep 10 23:41:29.207642 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:41:29.232683 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 23:41:29.233149 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:41:29.429344 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:41:29.431161 systemd[1]: Stopped verity-setup.service.
Sep 10 23:41:29.435025 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:41:29.435675 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:41:29.436553 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:41:29.437481 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:41:29.438413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:41:29.439463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:41:29.440492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:41:29.448857 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:41:29.451617 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:41:29.453034 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:41:29.453203 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:41:29.454536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:41:29.454768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:41:29.456818 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:41:29.456989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:41:29.458987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:41:29.459244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:41:29.460676 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:41:29.460949 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:41:29.462271 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:41:29.462475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:41:29.464109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:41:29.465414 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:41:29.467096 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:41:29.470013 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:41:29.481880 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:41:29.484500 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:41:29.487101 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:41:29.488056 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:41:29.488098 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:41:29.490224 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:41:29.503936 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:41:29.504938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:41:29.506351 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:41:29.508863 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:41:29.510104 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:41:29.511703 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:41:29.512970 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:41:29.519270 systemd-journald[1168]: Time spent on flushing to /var/log/journal/f70f52430c6e4e0fb8acec8c7a76f562 is 28.747ms for 891 entries.
Sep 10 23:41:29.519270 systemd-journald[1168]: System Journal (/var/log/journal/f70f52430c6e4e0fb8acec8c7a76f562) is 8M, max 195.6M, 187.6M free.
Sep 10 23:41:29.572797 systemd-journald[1168]: Received client request to flush runtime journal.
Sep 10 23:41:29.572902 kernel: loop0: detected capacity change from 0 to 138376
Sep 10 23:41:29.572923 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:41:29.516826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:41:29.520370 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:41:29.523025 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:41:29.529067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:41:29.530436 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:41:29.532877 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:41:29.534138 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:41:29.541857 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:41:29.544774 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:41:29.562933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:41:29.567126 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 10 23:41:29.567137 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 10 23:41:29.572090 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:41:29.575637 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:41:29.580785 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:41:29.583601 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:41:29.588580 kernel: loop1: detected capacity change from 0 to 203944
Sep 10 23:41:29.608592 kernel: loop2: detected capacity change from 0 to 107312
Sep 10 23:41:29.614183 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:41:29.618315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:41:29.646694 kernel: loop3: detected capacity change from 0 to 138376
Sep 10 23:41:29.650246 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Sep 10 23:41:29.650266 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Sep 10 23:41:29.654520 kernel: loop4: detected capacity change from 0 to 203944
Sep 10 23:41:29.657773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:41:29.666611 kernel: loop5: detected capacity change from 0 to 107312
Sep 10 23:41:29.671102 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 23:41:29.671584 (sd-merge)[1227]: Merged extensions into '/usr'.
Sep 10 23:41:29.675763 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:41:29.675781 systemd[1]: Reloading...
Sep 10 23:41:29.743598 zram_generator::config[1250]: No configuration found.
Sep 10 23:41:29.775101 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:41:29.833795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:41:29.898981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:41:29.899259 systemd[1]: Reloading finished in 223 ms.
Sep 10 23:41:29.917396 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:41:29.920587 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:41:29.935100 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:41:29.937053 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:41:29.946686 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:41:29.946705 systemd[1]: Reloading...
Sep 10 23:41:29.958085 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:41:29.958118 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:41:29.958339 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:41:29.958524 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:41:29.959162 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:41:29.959362 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Sep 10 23:41:29.959407 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Sep 10 23:41:29.962076 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:41:29.962089 systemd-tmpfiles[1290]: Skipping /boot
Sep 10 23:41:29.970942 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:41:29.970957 systemd-tmpfiles[1290]: Skipping /boot
Sep 10 23:41:30.000609 zram_generator::config[1320]: No configuration found.
Sep 10 23:41:30.074030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:41:30.138701 systemd[1]: Reloading finished in 191 ms.
Sep 10 23:41:30.150597 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:41:30.156756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:41:30.167752 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:41:30.170117 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:41:30.172306 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:41:30.175185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:41:30.179093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:41:30.181326 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:41:30.189097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:41:30.194907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:41:30.198087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:41:30.201173 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:41:30.202190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:41:30.202305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:41:30.205984 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:41:30.208137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:41:30.209994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:41:30.211862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:41:30.212788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:41:30.214468 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:41:30.214782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:41:30.218210 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:41:30.222494 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:41:30.223180 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:41:30.224993 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:41:30.229888 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:41:30.233691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:41:30.234493 augenrules[1387]: No rules
Sep 10 23:41:30.239622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:41:30.242831 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:41:30.245359 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:41:30.246471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:41:30.246612 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:41:30.248911 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:41:30.251231 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:41:30.252877 systemd-udevd[1362]: Using default interface naming scheme 'v255'.
Sep 10 23:41:30.256234 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:41:30.258641 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:41:30.260404 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:41:30.263825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:41:30.263994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:41:30.267119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:41:30.267301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:41:30.268858 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:41:30.269019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:41:30.278735 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:41:30.279696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:41:30.281012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:41:30.288618 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:41:30.291821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:41:30.296810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:41:30.297783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:41:30.297914 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:41:30.298033 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:41:30.298854 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:41:30.311247 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:41:30.312724 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:41:30.324756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:41:30.324993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:41:30.326318 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:41:30.326893 systemd-resolved[1357]: Positive Trust Anchors:
Sep 10 23:41:30.326926 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:41:30.326960 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:41:30.328616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:41:30.334492 systemd-resolved[1357]: Defaulting to hostname 'linux'.
Sep 10 23:41:30.337386 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:41:30.339555 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:41:30.340898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:41:30.343644 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 23:41:30.344952 augenrules[1408]: /sbin/augenrules: No change
Sep 10 23:41:30.345591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:41:30.347605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:41:30.357752 augenrules[1468]: No rules
Sep 10 23:41:30.363758 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:41:30.363968 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:41:30.365196 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:41:30.365398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:41:30.371282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 23:41:30.371345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 23:41:30.444113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 23:41:30.449534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 23:41:30.475733 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 23:41:30.489089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:41:30.509460 systemd-networkd[1449]: lo: Link UP Sep 10 23:41:30.509468 systemd-networkd[1449]: lo: Gained carrier Sep 10 23:41:30.510348 systemd-networkd[1449]: Enumeration completed Sep 10 23:41:30.510462 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:41:30.510799 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:41:30.510808 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:41:30.511401 systemd-networkd[1449]: eth0: Link UP Sep 10 23:41:30.511518 systemd-networkd[1449]: eth0: Gained carrier Sep 10 23:41:30.511538 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:41:30.511709 systemd[1]: Reached target network.target - Network. Sep 10 23:41:30.514315 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 10 23:41:30.518695 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:41:30.523608 systemd-networkd[1449]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:41:30.546010 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:41:30.550695 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 23:41:30.552079 systemd-timesyncd[1453]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 23:41:30.552134 systemd-timesyncd[1453]: Initial clock synchronization to Wed 2025-09-10 23:41:30.873973 UTC. Sep 10 23:41:30.552139 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:41:30.574617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:41:30.575892 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:41:30.576968 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:41:30.577991 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:41:30.579375 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:41:30.580379 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:41:30.581474 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:41:30.582499 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:41:30.582536 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:41:30.583314 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:41:30.585162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Sep 10 23:41:30.587464 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:41:30.590631 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:41:30.591805 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:41:30.592798 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:41:30.609681 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:41:30.611193 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:41:30.612737 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:41:30.613611 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:41:30.614325 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:41:30.615088 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:41:30.615119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:41:30.616263 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:41:30.618326 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:41:30.620170 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:41:30.622109 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:41:30.624000 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:41:30.624933 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:41:30.627757 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 10 23:41:30.629583 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:41:30.631701 jq[1510]: false Sep 10 23:41:30.634123 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:41:30.637740 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:41:30.638637 extend-filesystems[1511]: Found /dev/vda6 Sep 10 23:41:30.641251 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:41:30.645421 extend-filesystems[1511]: Found /dev/vda9 Sep 10 23:41:30.643947 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:41:30.646411 extend-filesystems[1511]: Checking size of /dev/vda9 Sep 10 23:41:30.644891 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:41:30.647798 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:41:30.649722 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:41:30.654179 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:41:30.656068 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:41:30.657761 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:41:30.658170 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:41:30.658336 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:41:30.659644 jq[1530]: true Sep 10 23:41:30.660928 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:41:30.661117 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 10 23:41:30.668581 extend-filesystems[1511]: Resized partition /dev/vda9 Sep 10 23:41:30.672629 extend-filesystems[1543]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:41:30.681592 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:41:30.684083 (ntainerd)[1539]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:41:30.687486 update_engine[1527]: I20250910 23:41:30.687309 1527 main.cc:92] Flatcar Update Engine starting Sep 10 23:41:30.695916 jq[1538]: true Sep 10 23:41:30.698778 tar[1534]: linux-arm64/helm Sep 10 23:41:30.717218 dbus-daemon[1508]: [system] SELinux support is enabled Sep 10 23:41:30.717429 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:41:30.720583 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:41:30.721590 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:41:30.721624 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:41:30.739729 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:41:30.739729 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:41:30.739729 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:41:30.749840 update_engine[1527]: I20250910 23:41:30.729761 1527 update_check_scheduler.cc:74] Next update check in 3m58s Sep 10 23:41:30.722721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 10 23:41:30.749966 extend-filesystems[1511]: Resized filesystem in /dev/vda9 Sep 10 23:41:30.722738 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:41:30.729671 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:41:30.732153 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:41:30.739361 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:41:30.739588 systemd-logind[1522]: New seat seat0. Sep 10 23:41:30.740268 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:41:30.745643 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:41:30.747882 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:41:30.774501 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:41:30.775199 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:41:30.778903 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 10 23:41:30.808068 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:41:30.879930 containerd[1539]: time="2025-09-10T23:41:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:41:30.883023 containerd[1539]: time="2025-09-10T23:41:30.882975640Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 10 23:41:30.892605 containerd[1539]: time="2025-09-10T23:41:30.892543920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.8µs" Sep 10 23:41:30.892605 containerd[1539]: time="2025-09-10T23:41:30.892597400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:41:30.892712 containerd[1539]: time="2025-09-10T23:41:30.892618240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:41:30.892821 containerd[1539]: time="2025-09-10T23:41:30.892799960Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:41:30.892867 containerd[1539]: time="2025-09-10T23:41:30.892821800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:41:30.892867 containerd[1539]: time="2025-09-10T23:41:30.892847360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:41:30.892962 containerd[1539]: time="2025-09-10T23:41:30.892901600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:41:30.892962 containerd[1539]: time="2025-09-10T23:41:30.892916640Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893210 containerd[1539]: time="2025-09-10T23:41:30.893149000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893210 containerd[1539]: time="2025-09-10T23:41:30.893169520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893210 containerd[1539]: time="2025-09-10T23:41:30.893181120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893210 containerd[1539]: time="2025-09-10T23:41:30.893188920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893295 containerd[1539]: time="2025-09-10T23:41:30.893254960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893465 containerd[1539]: time="2025-09-10T23:41:30.893439760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893494 containerd[1539]: time="2025-09-10T23:41:30.893473560Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:41:30.893494 containerd[1539]: time="2025-09-10T23:41:30.893484720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:41:30.893532 containerd[1539]: time="2025-09-10T23:41:30.893513440Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:41:30.893868 containerd[1539]: time="2025-09-10T23:41:30.893820960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:41:30.893974 containerd[1539]: time="2025-09-10T23:41:30.893952920Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:41:30.919839 containerd[1539]: time="2025-09-10T23:41:30.919778960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:41:30.919948 containerd[1539]: time="2025-09-10T23:41:30.919863960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:41:30.919948 containerd[1539]: time="2025-09-10T23:41:30.919886960Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:41:30.919948 containerd[1539]: time="2025-09-10T23:41:30.919901000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:41:30.919948 containerd[1539]: time="2025-09-10T23:41:30.919915680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:41:30.919948 containerd[1539]: time="2025-09-10T23:41:30.919928360Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:41:30.919948 containerd[1539]: time="2025-09-10T23:41:30.919940440Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:41:30.920069 containerd[1539]: time="2025-09-10T23:41:30.919963040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:41:30.920069 containerd[1539]: time="2025-09-10T23:41:30.919977320Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:41:30.920069 containerd[1539]: time="2025-09-10T23:41:30.919988640Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:41:30.920069 containerd[1539]: time="2025-09-10T23:41:30.919999640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:41:30.920069 containerd[1539]: time="2025-09-10T23:41:30.920013440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920186640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920238840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920257000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920268880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920279640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920290000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920302040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920314880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 
23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920330760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920347120Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:41:30.920496 containerd[1539]: time="2025-09-10T23:41:30.920359200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:41:30.920713 containerd[1539]: time="2025-09-10T23:41:30.920580840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:41:30.920713 containerd[1539]: time="2025-09-10T23:41:30.920598560Z" level=info msg="Start snapshots syncer" Sep 10 23:41:30.920713 containerd[1539]: time="2025-09-10T23:41:30.920631440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:41:30.921093 containerd[1539]: time="2025-09-10T23:41:30.920861720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:41:30.921093 containerd[1539]: time="2025-09-10T23:41:30.920920880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921005400Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921130640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921159080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921169920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921180920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921195920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921206960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:41:30.921216 containerd[1539]: time="2025-09-10T23:41:30.921218200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:41:30.921344 containerd[1539]: time="2025-09-10T23:41:30.921247640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:41:30.921344 containerd[1539]: time="2025-09-10T23:41:30.921260440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:41:30.921344 containerd[1539]: time="2025-09-10T23:41:30.921270880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:41:30.921344 containerd[1539]: time="2025-09-10T23:41:30.921314440Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:41:30.921344 containerd[1539]: time="2025-09-10T23:41:30.921330600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:41:30.921344 containerd[1539]: time="2025-09-10T23:41:30.921338880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:41:30.921442 containerd[1539]: time="2025-09-10T23:41:30.921347640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:41:30.921442 containerd[1539]: time="2025-09-10T23:41:30.921355200Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:41:30.921442 containerd[1539]: time="2025-09-10T23:41:30.921366120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:41:30.921442 containerd[1539]: time="2025-09-10T23:41:30.921378480Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:41:30.921506 containerd[1539]: time="2025-09-10T23:41:30.921455600Z" level=info msg="runtime interface created" Sep 10 23:41:30.921506 containerd[1539]: time="2025-09-10T23:41:30.921461120Z" level=info msg="created NRI interface" Sep 10 23:41:30.921506 containerd[1539]: time="2025-09-10T23:41:30.921472680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:41:30.921506 containerd[1539]: time="2025-09-10T23:41:30.921485560Z" level=info msg="Connect containerd service" Sep 10 23:41:30.921582 containerd[1539]: time="2025-09-10T23:41:30.921513160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:41:30.922445 
containerd[1539]: time="2025-09-10T23:41:30.922345200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:41:31.013205 containerd[1539]: time="2025-09-10T23:41:31.012995420Z" level=info msg="Start subscribing containerd event" Sep 10 23:41:31.013317 containerd[1539]: time="2025-09-10T23:41:31.013218074Z" level=info msg="Start recovering state" Sep 10 23:41:31.013529 containerd[1539]: time="2025-09-10T23:41:31.013507347Z" level=info msg="Start event monitor" Sep 10 23:41:31.013557 containerd[1539]: time="2025-09-10T23:41:31.013536682Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:41:31.013557 containerd[1539]: time="2025-09-10T23:41:31.013545586Z" level=info msg="Start streaming server" Sep 10 23:41:31.013689 containerd[1539]: time="2025-09-10T23:41:31.013670333Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:41:31.013689 containerd[1539]: time="2025-09-10T23:41:31.013688184Z" level=info msg="runtime interface starting up..." Sep 10 23:41:31.013755 containerd[1539]: time="2025-09-10T23:41:31.013695424Z" level=info msg="starting plugins..." Sep 10 23:41:31.013755 containerd[1539]: time="2025-09-10T23:41:31.013713898Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:41:31.014452 containerd[1539]: time="2025-09-10T23:41:31.013820254Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:41:31.014452 containerd[1539]: time="2025-09-10T23:41:31.013863944Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:41:31.014452 containerd[1539]: time="2025-09-10T23:41:31.013916997Z" level=info msg="containerd successfully booted in 0.134429s" Sep 10 23:41:31.014047 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 10 23:41:31.101378 tar[1534]: linux-arm64/LICENSE Sep 10 23:41:31.101481 tar[1534]: linux-arm64/README.md Sep 10 23:41:31.120242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:41:32.430856 systemd-networkd[1449]: eth0: Gained IPv6LL Sep 10 23:41:32.433306 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:41:32.435161 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:41:32.441454 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:41:32.452180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:41:32.454718 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:41:32.477399 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:41:32.479782 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:41:32.483428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:41:32.494912 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:41:32.720386 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:41:32.742295 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:41:32.745983 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:41:32.769777 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:41:32.770841 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:41:32.777138 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:41:32.800383 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:41:32.806511 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:41:32.809683 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Sep 10 23:41:32.811113 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:41:33.081921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:41:33.083449 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:41:33.085887 systemd[1]: Startup finished in 2.032s (kernel) + 5.227s (initrd) + 4.306s (userspace) = 11.566s. Sep 10 23:41:33.087471 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:41:33.534467 kubelet[1643]: E0910 23:41:33.534328 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:41:33.537110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:41:33.537262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:41:33.537579 systemd[1]: kubelet.service: Consumed 799ms CPU time, 256.2M memory peak. Sep 10 23:41:37.001244 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:41:37.002601 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:38552.service - OpenSSH per-connection server daemon (10.0.0.1:38552). Sep 10 23:41:37.086433 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 38552 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:41:37.088312 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:41:37.095161 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:41:37.096412 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 10 23:41:37.102621 systemd-logind[1522]: New session 1 of user core.
Sep 10 23:41:37.117502 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 10 23:41:37.120395 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 10 23:41:37.141837 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 10 23:41:37.145024 systemd-logind[1522]: New session c1 of user core.
Sep 10 23:41:37.261705 systemd[1660]: Queued start job for default target default.target.
Sep 10 23:41:37.274662 systemd[1660]: Created slice app.slice - User Application Slice.
Sep 10 23:41:37.274696 systemd[1660]: Reached target paths.target - Paths.
Sep 10 23:41:37.274736 systemd[1660]: Reached target timers.target - Timers.
Sep 10 23:41:37.276102 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 10 23:41:37.286229 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 10 23:41:37.286299 systemd[1660]: Reached target sockets.target - Sockets.
Sep 10 23:41:37.286343 systemd[1660]: Reached target basic.target - Basic System.
Sep 10 23:41:37.286371 systemd[1660]: Reached target default.target - Main User Target.
Sep 10 23:41:37.286398 systemd[1660]: Startup finished in 134ms.
Sep 10 23:41:37.286597 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 10 23:41:37.290973 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 10 23:41:37.362205 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:38554.service - OpenSSH per-connection server daemon (10.0.0.1:38554).
Sep 10 23:41:37.421703 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 38554 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:41:37.423326 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:41:37.438835 systemd-logind[1522]: New session 2 of user core.
Sep 10 23:41:37.455786 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 10 23:41:37.512086 sshd[1673]: Connection closed by 10.0.0.1 port 38554
Sep 10 23:41:37.512555 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
Sep 10 23:41:37.536871 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:38554.service: Deactivated successfully.
Sep 10 23:41:37.541270 systemd[1]: session-2.scope: Deactivated successfully.
Sep 10 23:41:37.542685 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit.
Sep 10 23:41:37.550134 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:38556.service - OpenSSH per-connection server daemon (10.0.0.1:38556).
Sep 10 23:41:37.551232 systemd-logind[1522]: Removed session 2.
Sep 10 23:41:37.620859 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 38556 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:41:37.622323 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:41:37.627648 systemd-logind[1522]: New session 3 of user core.
Sep 10 23:41:37.636965 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 10 23:41:37.686681 sshd[1681]: Connection closed by 10.0.0.1 port 38556
Sep 10 23:41:37.687320 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Sep 10 23:41:37.706656 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:38556.service: Deactivated successfully.
Sep 10 23:41:37.708746 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 23:41:37.711982 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit.
Sep 10 23:41:37.715545 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:38568.service - OpenSSH per-connection server daemon (10.0.0.1:38568).
Sep 10 23:41:37.717434 systemd-logind[1522]: Removed session 3.
Sep 10 23:41:37.769298 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 38568 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:41:37.770201 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:41:37.775071 systemd-logind[1522]: New session 4 of user core.
Sep 10 23:41:37.796239 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:41:37.854374 sshd[1689]: Connection closed by 10.0.0.1 port 38568
Sep 10 23:41:37.854887 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
Sep 10 23:41:37.866425 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:38568.service: Deactivated successfully.
Sep 10 23:41:37.868414 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:41:37.870394 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:41:37.873108 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:38574.service - OpenSSH per-connection server daemon (10.0.0.1:38574).
Sep 10 23:41:37.874704 systemd-logind[1522]: Removed session 4.
Sep 10 23:41:37.934099 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 38574 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:41:37.936115 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:41:37.940955 systemd-logind[1522]: New session 5 of user core.
Sep 10 23:41:37.956790 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:41:38.016715 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:41:38.016990 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:41:38.042452 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 10 23:41:38.045724 sshd[1697]: Connection closed by 10.0.0.1 port 38574
Sep 10 23:41:38.046108 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
Sep 10 23:41:38.067235 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:38574.service: Deactivated successfully.
Sep 10 23:41:38.070406 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:41:38.071169 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:41:38.073798 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:38582.service - OpenSSH per-connection server daemon (10.0.0.1:38582).
Sep 10 23:41:38.074708 systemd-logind[1522]: Removed session 5.
Sep 10 23:41:38.130055 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 38582 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:41:38.134816 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:41:38.139708 systemd-logind[1522]: New session 6 of user core.
Sep 10 23:41:38.153757 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:41:38.205831 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:41:38.206125 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:41:38.284679 sudo[1708]: pam_unix(sudo:session): session closed for user root
Sep 10 23:41:38.289749 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:41:38.290081 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:41:38.299901 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:41:38.338050 augenrules[1730]: No rules
Sep 10 23:41:38.339538 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:41:38.339824 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:41:38.341756 sudo[1707]: pam_unix(sudo:session): session closed for user root
Sep 10 23:41:38.342986 sshd[1706]: Connection closed by 10.0.0.1 port 38582
Sep 10 23:41:38.343510 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Sep 10 23:41:38.351728 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:38582.service: Deactivated successfully.
Sep 10 23:41:38.353217 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:41:38.354157 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:41:38.356925 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:38592.service - OpenSSH per-connection server daemon (10.0.0.1:38592).
Sep 10 23:41:38.358142 systemd-logind[1522]: Removed session 6.
Sep 10 23:41:38.419668 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 38592 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:41:38.421746 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:41:38.427273 systemd-logind[1522]: New session 7 of user core.
Sep 10 23:41:38.437761 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:41:38.494873 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:41:38.495153 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:41:38.879408 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:41:38.907996 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:41:39.135482 dockerd[1762]: time="2025-09-10T23:41:39.135354988Z" level=info msg="Starting up"
Sep 10 23:41:39.137642 dockerd[1762]: time="2025-09-10T23:41:39.137510512Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 10 23:41:39.187950 dockerd[1762]: time="2025-09-10T23:41:39.187723879Z" level=info msg="Loading containers: start."
Sep 10 23:41:39.200605 kernel: Initializing XFRM netlink socket
Sep 10 23:41:39.456490 systemd-networkd[1449]: docker0: Link UP
Sep 10 23:41:39.460817 dockerd[1762]: time="2025-09-10T23:41:39.460744021Z" level=info msg="Loading containers: done."
Sep 10 23:41:39.480897 dockerd[1762]: time="2025-09-10T23:41:39.480847454Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:41:39.481063 dockerd[1762]: time="2025-09-10T23:41:39.480950297Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 10 23:41:39.481089 dockerd[1762]: time="2025-09-10T23:41:39.481069240Z" level=info msg="Initializing buildkit"
Sep 10 23:41:39.506999 dockerd[1762]: time="2025-09-10T23:41:39.506950935Z" level=info msg="Completed buildkit initialization"
Sep 10 23:41:39.513523 dockerd[1762]: time="2025-09-10T23:41:39.513470753Z" level=info msg="Daemon has completed initialization"
Sep 10 23:41:39.513714 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:41:39.514005 dockerd[1762]: time="2025-09-10T23:41:39.513618853Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:41:40.113729 containerd[1539]: time="2025-09-10T23:41:40.113684889Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 10 23:41:40.804162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196625993.mount: Deactivated successfully.
Sep 10 23:41:41.654545 containerd[1539]: time="2025-09-10T23:41:41.653841976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:41.655637 containerd[1539]: time="2025-09-10T23:41:41.655593123Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 10 23:41:41.656671 containerd[1539]: time="2025-09-10T23:41:41.656644661Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:41.659543 containerd[1539]: time="2025-09-10T23:41:41.659505677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:41.660675 containerd[1539]: time="2025-09-10T23:41:41.660644044Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.546911116s"
Sep 10 23:41:41.660815 containerd[1539]: time="2025-09-10T23:41:41.660786577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 10 23:41:41.662270 containerd[1539]: time="2025-09-10T23:41:41.662230668Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 10 23:41:42.709137 containerd[1539]: time="2025-09-10T23:41:42.709084513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:42.710273 containerd[1539]: time="2025-09-10T23:41:42.710243876Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 10 23:41:42.711233 containerd[1539]: time="2025-09-10T23:41:42.711205503Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:42.713600 containerd[1539]: time="2025-09-10T23:41:42.713393711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:42.715477 containerd[1539]: time="2025-09-10T23:41:42.715442276Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.053167678s"
Sep 10 23:41:42.715602 containerd[1539]: time="2025-09-10T23:41:42.715580505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 10 23:41:42.716174 containerd[1539]: time="2025-09-10T23:41:42.716122682Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 10 23:41:43.677599 containerd[1539]: time="2025-09-10T23:41:43.676878944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:43.677726 containerd[1539]: time="2025-09-10T23:41:43.677630265Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 10 23:41:43.678324 containerd[1539]: time="2025-09-10T23:41:43.678296906Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:43.680821 containerd[1539]: time="2025-09-10T23:41:43.680789672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:43.683665 containerd[1539]: time="2025-09-10T23:41:43.683415185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 967.117793ms"
Sep 10 23:41:43.683741 containerd[1539]: time="2025-09-10T23:41:43.683672414Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 10 23:41:43.684380 containerd[1539]: time="2025-09-10T23:41:43.684153523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 10 23:41:43.787637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:41:43.789079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:41:43.930433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:41:43.935006 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:41:43.972189 kubelet[2045]: E0910 23:41:43.972137 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:41:43.975265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:41:43.975405 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:41:43.975709 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.8M memory peak.
Sep 10 23:41:44.709633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285419245.mount: Deactivated successfully.
Sep 10 23:41:45.097719 containerd[1539]: time="2025-09-10T23:41:45.097581289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:45.098463 containerd[1539]: time="2025-09-10T23:41:45.098424410Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 10 23:41:45.099336 containerd[1539]: time="2025-09-10T23:41:45.099291237Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:45.102037 containerd[1539]: time="2025-09-10T23:41:45.102003367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:45.102634 containerd[1539]: time="2025-09-10T23:41:45.102596506Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.418412395s"
Sep 10 23:41:45.102714 containerd[1539]: time="2025-09-10T23:41:45.102635265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 10 23:41:45.103322 containerd[1539]: time="2025-09-10T23:41:45.103245267Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 23:41:45.737310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010683370.mount: Deactivated successfully.
Sep 10 23:41:46.426973 containerd[1539]: time="2025-09-10T23:41:46.426913625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:46.428297 containerd[1539]: time="2025-09-10T23:41:46.428259092Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 10 23:41:46.430356 containerd[1539]: time="2025-09-10T23:41:46.430323966Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:46.433376 containerd[1539]: time="2025-09-10T23:41:46.432787593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:46.434042 containerd[1539]: time="2025-09-10T23:41:46.434012088Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.330699597s"
Sep 10 23:41:46.434103 containerd[1539]: time="2025-09-10T23:41:46.434047640Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 10 23:41:46.434587 containerd[1539]: time="2025-09-10T23:41:46.434505513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 23:41:46.860892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015054552.mount: Deactivated successfully.
Sep 10 23:41:46.867528 containerd[1539]: time="2025-09-10T23:41:46.867469032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:41:46.868272 containerd[1539]: time="2025-09-10T23:41:46.868231350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 23:41:46.869049 containerd[1539]: time="2025-09-10T23:41:46.869006095Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:41:46.871201 containerd[1539]: time="2025-09-10T23:41:46.871162583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:41:46.871928 containerd[1539]: time="2025-09-10T23:41:46.871889751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 437.352628ms"
Sep 10 23:41:46.871977 containerd[1539]: time="2025-09-10T23:41:46.871933306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 23:41:46.872744 containerd[1539]: time="2025-09-10T23:41:46.872708896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 10 23:41:47.366996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632524053.mount: Deactivated successfully.
Sep 10 23:41:48.778956 containerd[1539]: time="2025-09-10T23:41:48.778895055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:48.780225 containerd[1539]: time="2025-09-10T23:41:48.780194195Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 10 23:41:48.781197 containerd[1539]: time="2025-09-10T23:41:48.781173972Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:48.783852 containerd[1539]: time="2025-09-10T23:41:48.783822579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:41:48.785856 containerd[1539]: time="2025-09-10T23:41:48.785825513Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.913082364s"
Sep 10 23:41:48.785856 containerd[1539]: time="2025-09-10T23:41:48.785858289Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 10 23:41:53.096770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:41:53.096931 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.8M memory peak.
Sep 10 23:41:53.101947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:41:53.124547 systemd[1]: Reload requested from client PID 2200 ('systemctl') (unit session-7.scope)...
Sep 10 23:41:53.124735 systemd[1]: Reloading...
Sep 10 23:41:53.205594 zram_generator::config[2242]: No configuration found.
Sep 10 23:41:53.317843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:41:53.405458 systemd[1]: Reloading finished in 280 ms.
Sep 10 23:41:53.463445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:41:53.465450 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:41:53.468011 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 23:41:53.468203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:41:53.468239 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.2M memory peak.
Sep 10 23:41:53.469595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:41:53.603734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:41:53.607212 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:41:53.640534 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:41:53.640534 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:41:53.640534 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:41:53.640885 kubelet[2289]: I0910 23:41:53.640629 2289 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:41:55.089733 kubelet[2289]: I0910 23:41:55.089684 2289 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 23:41:55.089733 kubelet[2289]: I0910 23:41:55.089718 2289 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:41:55.090074 kubelet[2289]: I0910 23:41:55.089966 2289 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 23:41:55.114390 kubelet[2289]: E0910 23:41:55.114229 2289 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:41:55.115687 kubelet[2289]: I0910 23:41:55.115651 2289 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:41:55.122410 kubelet[2289]: I0910 23:41:55.122328 2289 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:41:55.126069 kubelet[2289]: I0910 23:41:55.126039 2289 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:41:55.126884 kubelet[2289]: I0910 23:41:55.126847 2289 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 23:41:55.127046 kubelet[2289]: I0910 23:41:55.127012 2289 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:41:55.127229 kubelet[2289]: I0910 23:41:55.127049 2289 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:41:55.127316 kubelet[2289]: I0910 23:41:55.127240 2289 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:41:55.127316 kubelet[2289]: I0910 23:41:55.127249 2289 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 23:41:55.127496 kubelet[2289]: I0910 23:41:55.127482 2289 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:41:55.130405 kubelet[2289]: I0910 23:41:55.130197 2289 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 23:41:55.130405 kubelet[2289]: I0910 23:41:55.130234 2289 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:41:55.130405 kubelet[2289]: I0910 23:41:55.130258 2289 kubelet.go:314] "Adding apiserver pod source"
Sep 10 23:41:55.130405 kubelet[2289]: I0910 23:41:55.130336 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:41:55.134247 kubelet[2289]: I0910 23:41:55.134219 2289 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 10 23:41:55.134980 kubelet[2289]: I0910 23:41:55.134953 2289 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:41:55.135229 kubelet[2289]: W0910 23:41:55.135188 2289 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 23:41:55.135405 kubelet[2289]: W0910 23:41:55.135341 2289 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Sep 10 23:41:55.135466 kubelet[2289]: E0910 23:41:55.135423 2289 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:41:55.135544 kubelet[2289]: W0910 23:41:55.135375 2289 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Sep 10 23:41:55.135632 kubelet[2289]: E0910 23:41:55.135612 2289 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:41:55.136859 kubelet[2289]: I0910 23:41:55.136256 2289 server.go:1274] "Started kubelet"
Sep 10 23:41:55.136859 kubelet[2289]: I0910 23:41:55.136537 2289 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:41:55.137429 kubelet[2289]: I0910 23:41:55.137359 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:41:55.140544 kubelet[2289]: I0910 23:41:55.140508 2289 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:41:55.141668 kubelet[2289]: I0910 23:41:55.141639 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:41:55.142645 kubelet[2289]: I0910 23:41:55.142617 2289 server.go:449] "Adding debug handlers to kubelet server" Sep 10 23:41:55.143104 kubelet[2289]: E0910 23:41:55.142039 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18641055c47e7cda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:41:55.136224474 +0000 UTC m=+1.525952460,LastTimestamp:2025-09-10 23:41:55.136224474 +0000 UTC m=+1.525952460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:41:55.143984 kubelet[2289]: I0910 23:41:55.143950 2289 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:41:55.144231 kubelet[2289]: E0910 23:41:55.144211 2289 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:41:55.144578 kubelet[2289]: I0910 23:41:55.144549 2289 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 23:41:55.144894 kubelet[2289]: I0910 23:41:55.144870 2289 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 23:41:55.145066 kubelet[2289]: I0910 23:41:55.145054 2289 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:41:55.145868 kubelet[2289]: W0910 23:41:55.145818 2289 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 23:41:55.145933 kubelet[2289]: E0910 23:41:55.145882 2289 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:41:55.145933 kubelet[2289]: E0910 23:41:55.145905 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Sep 10 23:41:55.146614 kubelet[2289]: I0910 23:41:55.146591 2289 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:41:55.147147 kubelet[2289]: I0910 23:41:55.147122 2289 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:41:55.147220 kubelet[2289]: E0910 23:41:55.147172 2289 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:41:55.149596 kubelet[2289]: I0910 23:41:55.149577 2289 factory.go:221] Registration of the containerd container factory successfully Sep 10 23:41:55.158630 kubelet[2289]: I0910 23:41:55.158537 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:41:55.159730 kubelet[2289]: I0910 23:41:55.159695 2289 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 23:41:55.159730 kubelet[2289]: I0910 23:41:55.159726 2289 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 23:41:55.159805 kubelet[2289]: I0910 23:41:55.159746 2289 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 23:41:55.159805 kubelet[2289]: E0910 23:41:55.159789 2289 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:41:55.162967 kubelet[2289]: W0910 23:41:55.162912 2289 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 23:41:55.163051 kubelet[2289]: E0910 23:41:55.162971 2289 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:41:55.163051 kubelet[2289]: I0910 23:41:55.163041 2289 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 23:41:55.163096 kubelet[2289]: I0910 23:41:55.163050 2289 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 23:41:55.163096 kubelet[2289]: I0910 23:41:55.163083 2289 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:41:55.165162 kubelet[2289]: I0910 23:41:55.165128 2289 policy_none.go:49] "None policy: Start" Sep 10 23:41:55.165790 kubelet[2289]: I0910 23:41:55.165772 2289 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 23:41:55.165860 kubelet[2289]: I0910 23:41:55.165795 2289 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:41:55.173120 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Sep 10 23:41:55.186824 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 23:41:55.209448 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:41:55.212946 kubelet[2289]: I0910 23:41:55.212795 2289 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:41:55.213470 kubelet[2289]: I0910 23:41:55.213437 2289 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:41:55.213506 kubelet[2289]: I0910 23:41:55.213458 2289 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:41:55.214188 kubelet[2289]: I0910 23:41:55.214030 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:41:55.215094 kubelet[2289]: E0910 23:41:55.215070 2289 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:41:55.268430 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 10 23:41:55.286059 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 10 23:41:55.290159 systemd[1]: Created slice kubepods-burstable-pod7dbf5d4262c971b769ec472652e24952.slice - libcontainer container kubepods-burstable-pod7dbf5d4262c971b769ec472652e24952.slice. 
Sep 10 23:41:55.314737 kubelet[2289]: I0910 23:41:55.314696 2289 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:41:55.315399 kubelet[2289]: E0910 23:41:55.315346 2289 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 23:41:55.346140 kubelet[2289]: I0910 23:41:55.345837 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7dbf5d4262c971b769ec472652e24952-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbf5d4262c971b769ec472652e24952\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:41:55.346140 kubelet[2289]: I0910 23:41:55.345881 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dbf5d4262c971b769ec472652e24952-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbf5d4262c971b769ec472652e24952\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:41:55.346140 kubelet[2289]: I0910 23:41:55.345904 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:41:55.346140 kubelet[2289]: I0910 23:41:55.345921 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:41:55.346140 kubelet[2289]: 
I0910 23:41:55.345954 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:41:55.346461 kubelet[2289]: I0910 23:41:55.345969 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dbf5d4262c971b769ec472652e24952-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7dbf5d4262c971b769ec472652e24952\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:41:55.346461 kubelet[2289]: I0910 23:41:55.345984 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:41:55.346461 kubelet[2289]: I0910 23:41:55.345999 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:41:55.346461 kubelet[2289]: I0910 23:41:55.346024 2289 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 
23:41:55.346461 kubelet[2289]: E0910 23:41:55.346436 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Sep 10 23:41:55.517524 kubelet[2289]: I0910 23:41:55.517489 2289 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:41:55.517968 kubelet[2289]: E0910 23:41:55.517929 2289 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 23:41:55.559810 kubelet[2289]: E0910 23:41:55.559701 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18641055c47e7cda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:41:55.136224474 +0000 UTC m=+1.525952460,LastTimestamp:2025-09-10 23:41:55.136224474 +0000 UTC m=+1.525952460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:41:55.583579 containerd[1539]: time="2025-09-10T23:41:55.583524253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 10 23:41:55.589429 containerd[1539]: time="2025-09-10T23:41:55.589285004Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 10 23:41:55.593256 containerd[1539]: time="2025-09-10T23:41:55.593224146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7dbf5d4262c971b769ec472652e24952,Namespace:kube-system,Attempt:0,}" Sep 10 23:41:55.606647 containerd[1539]: time="2025-09-10T23:41:55.606351145Z" level=info msg="connecting to shim d725853e86162504c9fda4e6ae1237bc341e2396a385e9cb0b8d1874b092d603" address="unix:///run/containerd/s/e05c1ef8becbe7cb0f36f75170bcf893ee868974e9f9a1ae546c9f4d75b5b0c9" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:41:55.620779 containerd[1539]: time="2025-09-10T23:41:55.620705746Z" level=info msg="connecting to shim 5027adbbead8080d123fb8897419d2ad26e0d9bcdf5c45e129a6a43cb73f57b8" address="unix:///run/containerd/s/efee7966a663bf7d542364422c899fe8c938dc11ce16596eeecff53f9886b443" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:41:55.629897 containerd[1539]: time="2025-09-10T23:41:55.629852176Z" level=info msg="connecting to shim d59eb9cfe8d7115c3c956c28c72af13a137764ed8209a809dbbee05e2a308d20" address="unix:///run/containerd/s/8e0ca4ccbb1bab67493bf1f0b7f52c38eb2bba5d2586490942e378869afcbfb1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:41:55.640882 systemd[1]: Started cri-containerd-d725853e86162504c9fda4e6ae1237bc341e2396a385e9cb0b8d1874b092d603.scope - libcontainer container d725853e86162504c9fda4e6ae1237bc341e2396a385e9cb0b8d1874b092d603. Sep 10 23:41:55.645206 systemd[1]: Started cri-containerd-5027adbbead8080d123fb8897419d2ad26e0d9bcdf5c45e129a6a43cb73f57b8.scope - libcontainer container 5027adbbead8080d123fb8897419d2ad26e0d9bcdf5c45e129a6a43cb73f57b8. Sep 10 23:41:55.658752 systemd[1]: Started cri-containerd-d59eb9cfe8d7115c3c956c28c72af13a137764ed8209a809dbbee05e2a308d20.scope - libcontainer container d59eb9cfe8d7115c3c956c28c72af13a137764ed8209a809dbbee05e2a308d20. 
Sep 10 23:41:55.698416 containerd[1539]: time="2025-09-10T23:41:55.698293347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7dbf5d4262c971b769ec472652e24952,Namespace:kube-system,Attempt:0,} returns sandbox id \"5027adbbead8080d123fb8897419d2ad26e0d9bcdf5c45e129a6a43cb73f57b8\"" Sep 10 23:41:55.699316 containerd[1539]: time="2025-09-10T23:41:55.699278794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d725853e86162504c9fda4e6ae1237bc341e2396a385e9cb0b8d1874b092d603\"" Sep 10 23:41:55.703575 containerd[1539]: time="2025-09-10T23:41:55.703494466Z" level=info msg="CreateContainer within sandbox \"5027adbbead8080d123fb8897419d2ad26e0d9bcdf5c45e129a6a43cb73f57b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:41:55.705602 containerd[1539]: time="2025-09-10T23:41:55.704940303Z" level=info msg="CreateContainer within sandbox \"d725853e86162504c9fda4e6ae1237bc341e2396a385e9cb0b8d1874b092d603\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:41:55.707751 containerd[1539]: time="2025-09-10T23:41:55.706073550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d59eb9cfe8d7115c3c956c28c72af13a137764ed8209a809dbbee05e2a308d20\"" Sep 10 23:41:55.711137 containerd[1539]: time="2025-09-10T23:41:55.711091450Z" level=info msg="CreateContainer within sandbox \"d59eb9cfe8d7115c3c956c28c72af13a137764ed8209a809dbbee05e2a308d20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:41:55.712771 containerd[1539]: time="2025-09-10T23:41:55.712739177Z" level=info msg="Container 2f537bc9ecac08ded3908e03f72ebe0c7a991be2ecd772e217be1f74cd473b95: CDI devices from CRI Config.CDIDevices: []" Sep 10 
23:41:55.721513 containerd[1539]: time="2025-09-10T23:41:55.721209304Z" level=info msg="Container 8c7f604e20acfeab765fe63a7323c67b9ec0df10954d8943493bb2c214daba70: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:41:55.723348 containerd[1539]: time="2025-09-10T23:41:55.723310810Z" level=info msg="Container 3b38943cd59229b3905c6a07592aa11fb68c81b5f090b372d622833f670ba833: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:41:55.726165 containerd[1539]: time="2025-09-10T23:41:55.726091864Z" level=info msg="CreateContainer within sandbox \"5027adbbead8080d123fb8897419d2ad26e0d9bcdf5c45e129a6a43cb73f57b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f537bc9ecac08ded3908e03f72ebe0c7a991be2ecd772e217be1f74cd473b95\"" Sep 10 23:41:55.727012 containerd[1539]: time="2025-09-10T23:41:55.726861078Z" level=info msg="StartContainer for \"2f537bc9ecac08ded3908e03f72ebe0c7a991be2ecd772e217be1f74cd473b95\"" Sep 10 23:41:55.728228 containerd[1539]: time="2025-09-10T23:41:55.728190645Z" level=info msg="connecting to shim 2f537bc9ecac08ded3908e03f72ebe0c7a991be2ecd772e217be1f74cd473b95" address="unix:///run/containerd/s/efee7966a663bf7d542364422c899fe8c938dc11ce16596eeecff53f9886b443" protocol=ttrpc version=3 Sep 10 23:41:55.728462 containerd[1539]: time="2025-09-10T23:41:55.728429034Z" level=info msg="CreateContainer within sandbox \"d59eb9cfe8d7115c3c956c28c72af13a137764ed8209a809dbbee05e2a308d20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c7f604e20acfeab765fe63a7323c67b9ec0df10954d8943493bb2c214daba70\"" Sep 10 23:41:55.728827 containerd[1539]: time="2025-09-10T23:41:55.728799959Z" level=info msg="StartContainer for \"8c7f604e20acfeab765fe63a7323c67b9ec0df10954d8943493bb2c214daba70\"" Sep 10 23:41:55.730269 containerd[1539]: time="2025-09-10T23:41:55.729886490Z" level=info msg="connecting to shim 8c7f604e20acfeab765fe63a7323c67b9ec0df10954d8943493bb2c214daba70" 
address="unix:///run/containerd/s/8e0ca4ccbb1bab67493bf1f0b7f52c38eb2bba5d2586490942e378869afcbfb1" protocol=ttrpc version=3 Sep 10 23:41:55.735083 containerd[1539]: time="2025-09-10T23:41:55.735013047Z" level=info msg="CreateContainer within sandbox \"d725853e86162504c9fda4e6ae1237bc341e2396a385e9cb0b8d1874b092d603\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b38943cd59229b3905c6a07592aa11fb68c81b5f090b372d622833f670ba833\"" Sep 10 23:41:55.735903 containerd[1539]: time="2025-09-10T23:41:55.735875733Z" level=info msg="StartContainer for \"3b38943cd59229b3905c6a07592aa11fb68c81b5f090b372d622833f670ba833\"" Sep 10 23:41:55.737804 containerd[1539]: time="2025-09-10T23:41:55.737765654Z" level=info msg="connecting to shim 3b38943cd59229b3905c6a07592aa11fb68c81b5f090b372d622833f670ba833" address="unix:///run/containerd/s/e05c1ef8becbe7cb0f36f75170bcf893ee868974e9f9a1ae546c9f4d75b5b0c9" protocol=ttrpc version=3 Sep 10 23:41:55.747700 kubelet[2289]: E0910 23:41:55.747649 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Sep 10 23:41:55.749752 systemd[1]: Started cri-containerd-8c7f604e20acfeab765fe63a7323c67b9ec0df10954d8943493bb2c214daba70.scope - libcontainer container 8c7f604e20acfeab765fe63a7323c67b9ec0df10954d8943493bb2c214daba70. Sep 10 23:41:55.753228 systemd[1]: Started cri-containerd-2f537bc9ecac08ded3908e03f72ebe0c7a991be2ecd772e217be1f74cd473b95.scope - libcontainer container 2f537bc9ecac08ded3908e03f72ebe0c7a991be2ecd772e217be1f74cd473b95. Sep 10 23:41:55.774820 systemd[1]: Started cri-containerd-3b38943cd59229b3905c6a07592aa11fb68c81b5f090b372d622833f670ba833.scope - libcontainer container 3b38943cd59229b3905c6a07592aa11fb68c81b5f090b372d622833f670ba833. 
Sep 10 23:41:55.799631 containerd[1539]: time="2025-09-10T23:41:55.799590880Z" level=info msg="StartContainer for \"8c7f604e20acfeab765fe63a7323c67b9ec0df10954d8943493bb2c214daba70\" returns successfully" Sep 10 23:41:55.825207 containerd[1539]: time="2025-09-10T23:41:55.825108319Z" level=info msg="StartContainer for \"2f537bc9ecac08ded3908e03f72ebe0c7a991be2ecd772e217be1f74cd473b95\" returns successfully" Sep 10 23:41:55.841443 containerd[1539]: time="2025-09-10T23:41:55.841403202Z" level=info msg="StartContainer for \"3b38943cd59229b3905c6a07592aa11fb68c81b5f090b372d622833f670ba833\" returns successfully" Sep 10 23:41:55.921283 kubelet[2289]: I0910 23:41:55.920717 2289 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:41:57.515105 kubelet[2289]: E0910 23:41:57.515047 2289 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 23:41:57.619368 kubelet[2289]: I0910 23:41:57.619261 2289 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 23:41:57.619368 kubelet[2289]: E0910 23:41:57.619304 2289 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 23:41:58.132984 kubelet[2289]: I0910 23:41:58.132942 2289 apiserver.go:52] "Watching apiserver" Sep 10 23:41:58.145146 kubelet[2289]: I0910 23:41:58.145104 2289 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 23:41:58.185285 kubelet[2289]: E0910 23:41:58.185241 2289 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 23:41:59.547855 systemd[1]: Reload requested from client PID 2566 ('systemctl') (unit session-7.scope)... Sep 10 23:41:59.547872 systemd[1]: Reloading... 
Sep 10 23:41:59.624604 zram_generator::config[2615]: No configuration found. Sep 10 23:41:59.700188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:41:59.804010 systemd[1]: Reloading finished in 255 ms. Sep 10 23:41:59.832175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:41:59.851261 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:41:59.852626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:41:59.852679 systemd[1]: kubelet.service: Consumed 1.892s CPU time, 127.9M memory peak. Sep 10 23:41:59.855151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:42:00.013902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:42:00.031936 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:42:00.086691 kubelet[2651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:42:00.086691 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 23:42:00.086691 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 23:42:00.086691 kubelet[2651]: I0910 23:42:00.085929 2651 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:42:00.093832 kubelet[2651]: I0910 23:42:00.093797 2651 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 23:42:00.093980 kubelet[2651]: I0910 23:42:00.093970 2651 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:42:00.094268 kubelet[2651]: I0910 23:42:00.094252 2651 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 23:42:00.095700 kubelet[2651]: I0910 23:42:00.095681 2651 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 23:42:00.097775 kubelet[2651]: I0910 23:42:00.097739 2651 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:42:00.101915 kubelet[2651]: I0910 23:42:00.101893 2651 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:42:00.104831 kubelet[2651]: I0910 23:42:00.104808 2651 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:42:00.104988 kubelet[2651]: I0910 23:42:00.104974 2651 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 23:42:00.105111 kubelet[2651]: I0910 23:42:00.105085 2651 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:42:00.105284 kubelet[2651]: I0910 23:42:00.105112 2651 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 10 23:42:00.105357 kubelet[2651]: I0910 23:42:00.105296 2651 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:42:00.105357 kubelet[2651]: I0910 23:42:00.105305 2651 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 23:42:00.105357 kubelet[2651]: I0910 23:42:00.105339 2651 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:42:00.105486 kubelet[2651]: I0910 23:42:00.105459 2651 kubelet.go:408] "Attempting to sync node with API server" Sep 10 23:42:00.105515 kubelet[2651]: I0910 23:42:00.105507 2651 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:42:00.105540 kubelet[2651]: I0910 23:42:00.105528 2651 kubelet.go:314] "Adding apiserver pod source" Sep 10 23:42:00.105540 kubelet[2651]: I0910 23:42:00.105538 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:42:00.106753 kubelet[2651]: I0910 23:42:00.106727 2651 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:42:00.107319 kubelet[2651]: I0910 23:42:00.107290 2651 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 23:42:00.107841 kubelet[2651]: I0910 23:42:00.107819 2651 server.go:1274] "Started kubelet" Sep 10 23:42:00.109580 kubelet[2651]: I0910 23:42:00.109509 2651 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:42:00.113571 kubelet[2651]: I0910 23:42:00.112772 2651 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:42:00.113571 kubelet[2651]: I0910 23:42:00.111069 2651 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:42:00.113571 kubelet[2651]: I0910 
23:42:00.112998 2651 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:42:00.113571 kubelet[2651]: I0910 23:42:00.110189 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:42:00.116869 kubelet[2651]: E0910 23:42:00.116847 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:42:00.117415 kubelet[2651]: I0910 23:42:00.117374 2651 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 23:42:00.123846 kubelet[2651]: I0910 23:42:00.123755 2651 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 23:42:00.123924 kubelet[2651]: I0910 23:42:00.123918 2651 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:42:00.125599 kubelet[2651]: I0910 23:42:00.125536 2651 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:42:00.125686 kubelet[2651]: I0910 23:42:00.125662 2651 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:42:00.127752 kubelet[2651]: I0910 23:42:00.127717 2651 server.go:449] "Adding debug handlers to kubelet server" Sep 10 23:42:00.132435 kubelet[2651]: I0910 23:42:00.132405 2651 factory.go:221] Registration of the containerd container factory successfully Sep 10 23:42:00.132435 kubelet[2651]: E0910 23:42:00.132420 2651 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:42:00.136101 kubelet[2651]: I0910 23:42:00.135950 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:42:00.137540 kubelet[2651]: I0910 23:42:00.137246 2651 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 23:42:00.137647 kubelet[2651]: I0910 23:42:00.137632 2651 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 23:42:00.137741 kubelet[2651]: I0910 23:42:00.137731 2651 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 23:42:00.137884 kubelet[2651]: E0910 23:42:00.137866 2651 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:42:00.176105 kubelet[2651]: I0910 23:42:00.176073 2651 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 23:42:00.176105 kubelet[2651]: I0910 23:42:00.176094 2651 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 23:42:00.176105 kubelet[2651]: I0910 23:42:00.176116 2651 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:42:00.176299 kubelet[2651]: I0910 23:42:00.176269 2651 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:42:00.176299 kubelet[2651]: I0910 23:42:00.176284 2651 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:42:00.176395 kubelet[2651]: I0910 23:42:00.176304 2651 policy_none.go:49] "None policy: Start" Sep 10 23:42:00.177053 kubelet[2651]: I0910 23:42:00.177025 2651 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 23:42:00.177100 kubelet[2651]: I0910 23:42:00.177062 2651 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:42:00.177244 kubelet[2651]: I0910 23:42:00.177229 2651 state_mem.go:75] "Updated machine memory state" Sep 10 23:42:00.181803 kubelet[2651]: I0910 23:42:00.181781 2651 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:42:00.182186 kubelet[2651]: I0910 23:42:00.182159 2651 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:42:00.182247 kubelet[2651]: I0910 23:42:00.182182 2651 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:42:00.182458 kubelet[2651]: I0910 23:42:00.182438 2651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:42:00.284450 kubelet[2651]: I0910 23:42:00.284407 2651 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:42:00.309593 kubelet[2651]: I0910 23:42:00.309542 2651 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 23:42:00.309906 kubelet[2651]: I0910 23:42:00.309694 2651 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 23:42:00.325223 kubelet[2651]: I0910 23:42:00.325185 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:42:00.325223 kubelet[2651]: I0910 23:42:00.325226 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:42:00.325373 kubelet[2651]: I0910 23:42:00.325258 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:42:00.325373 kubelet[2651]: I0910 23:42:00.325278 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:42:00.325373 kubelet[2651]: I0910 23:42:00.325296 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7dbf5d4262c971b769ec472652e24952-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbf5d4262c971b769ec472652e24952\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:42:00.325373 kubelet[2651]: I0910 23:42:00.325311 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dbf5d4262c971b769ec472652e24952-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbf5d4262c971b769ec472652e24952\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:42:00.325373 kubelet[2651]: I0910 23:42:00.325326 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dbf5d4262c971b769ec472652e24952-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7dbf5d4262c971b769ec472652e24952\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:42:00.325472 kubelet[2651]: I0910 23:42:00.325341 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:42:00.325472 kubelet[2651]: I0910 23:42:00.325357 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:42:00.546123 sudo[2684]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 23:42:00.546373 sudo[2684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 10 23:42:01.023225 sudo[2684]: pam_unix(sudo:session): session closed for user root Sep 10 23:42:01.106766 kubelet[2651]: I0910 23:42:01.106715 2651 apiserver.go:52] "Watching apiserver" Sep 10 23:42:01.124891 kubelet[2651]: I0910 23:42:01.124839 2651 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 23:42:01.173069 kubelet[2651]: E0910 23:42:01.173018 2651 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:42:01.199410 kubelet[2651]: I0910 23:42:01.197876 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.197153216 podStartE2EDuration="1.197153216s" podCreationTimestamp="2025-09-10 23:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:42:01.194394636 +0000 UTC m=+1.156236718" watchObservedRunningTime="2025-09-10 23:42:01.197153216 +0000 UTC m=+1.158995338" Sep 10 23:42:01.217624 kubelet[2651]: I0910 23:42:01.217504 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.217485785 podStartE2EDuration="1.217485785s" podCreationTimestamp="2025-09-10 23:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-09-10 23:42:01.208677055 +0000 UTC m=+1.170519137" watchObservedRunningTime="2025-09-10 23:42:01.217485785 +0000 UTC m=+1.179327867" Sep 10 23:42:01.227591 kubelet[2651]: I0910 23:42:01.227514 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.227500358 podStartE2EDuration="1.227500358s" podCreationTimestamp="2025-09-10 23:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:42:01.217866904 +0000 UTC m=+1.179708986" watchObservedRunningTime="2025-09-10 23:42:01.227500358 +0000 UTC m=+1.189342441" Sep 10 23:42:03.460493 sudo[1742]: pam_unix(sudo:session): session closed for user root Sep 10 23:42:03.461917 sshd[1741]: Connection closed by 10.0.0.1 port 38592 Sep 10 23:42:03.462430 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 10 23:42:03.465301 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:38592.service: Deactivated successfully. Sep 10 23:42:03.467802 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 23:42:03.468188 systemd[1]: session-7.scope: Consumed 7.209s CPU time, 262.2M memory peak. Sep 10 23:42:03.469871 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Sep 10 23:42:03.471432 systemd-logind[1522]: Removed session 7. Sep 10 23:42:05.977000 kubelet[2651]: I0910 23:42:05.976968 2651 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:42:05.977472 containerd[1539]: time="2025-09-10T23:42:05.977278529Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 23:42:05.977706 kubelet[2651]: I0910 23:42:05.977465 2651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:42:06.766095 kubelet[2651]: I0910 23:42:06.765270 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c2bf680-901d-4f69-9373-fd14da34e063-xtables-lock\") pod \"kube-proxy-t4sbd\" (UID: \"6c2bf680-901d-4f69-9373-fd14da34e063\") " pod="kube-system/kube-proxy-t4sbd" Sep 10 23:42:06.767932 kubelet[2651]: I0910 23:42:06.767842 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtc68\" (UniqueName: \"kubernetes.io/projected/6c2bf680-901d-4f69-9373-fd14da34e063-kube-api-access-gtc68\") pod \"kube-proxy-t4sbd\" (UID: \"6c2bf680-901d-4f69-9373-fd14da34e063\") " pod="kube-system/kube-proxy-t4sbd" Sep 10 23:42:06.767932 kubelet[2651]: I0910 23:42:06.767891 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c2bf680-901d-4f69-9373-fd14da34e063-lib-modules\") pod \"kube-proxy-t4sbd\" (UID: \"6c2bf680-901d-4f69-9373-fd14da34e063\") " pod="kube-system/kube-proxy-t4sbd" Sep 10 23:42:06.767932 kubelet[2651]: I0910 23:42:06.767913 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c2bf680-901d-4f69-9373-fd14da34e063-kube-proxy\") pod \"kube-proxy-t4sbd\" (UID: \"6c2bf680-901d-4f69-9373-fd14da34e063\") " pod="kube-system/kube-proxy-t4sbd" Sep 10 23:42:06.777856 systemd[1]: Created slice kubepods-besteffort-pod6c2bf680_901d_4f69_9373_fd14da34e063.slice - libcontainer container kubepods-besteffort-pod6c2bf680_901d_4f69_9373_fd14da34e063.slice. 
Sep 10 23:42:06.801912 systemd[1]: Created slice kubepods-burstable-pod42f6e2c1_7613_46a9_9de6_dd85a28c0449.slice - libcontainer container kubepods-burstable-pod42f6e2c1_7613_46a9_9de6_dd85a28c0449.slice. Sep 10 23:42:06.870456 kubelet[2651]: I0910 23:42:06.870371 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-bpf-maps\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870456 kubelet[2651]: I0910 23:42:06.870440 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hostproc\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870456 kubelet[2651]: I0910 23:42:06.870464 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cni-path\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870776 kubelet[2651]: I0910 23:42:06.870480 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-etc-cni-netd\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870776 kubelet[2651]: I0910 23:42:06.870496 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-xtables-lock\") pod \"cilium-vpgxt\" (UID: 
\"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870776 kubelet[2651]: I0910 23:42:06.870522 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-lib-modules\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870776 kubelet[2651]: I0910 23:42:06.870601 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hubble-tls\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870776 kubelet[2651]: I0910 23:42:06.870757 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-config-path\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870879 kubelet[2651]: I0910 23:42:06.870793 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42f6e2c1-7613-46a9-9de6-dd85a28c0449-clustermesh-secrets\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870879 kubelet[2651]: I0910 23:42:06.870812 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmnw\" (UniqueName: \"kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-kube-api-access-gmmnw\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870879 
kubelet[2651]: I0910 23:42:06.870828 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-cgroup\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870879 kubelet[2651]: I0910 23:42:06.870868 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-run\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870958 kubelet[2651]: I0910 23:42:06.870883 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-net\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:06.870958 kubelet[2651]: I0910 23:42:06.870900 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-kernel\") pod \"cilium-vpgxt\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") " pod="kube-system/cilium-vpgxt" Sep 10 23:42:07.025237 systemd[1]: Created slice kubepods-besteffort-pod3722ae0c_e28d_4c03_8b33_759a49414044.slice - libcontainer container kubepods-besteffort-pod3722ae0c_e28d_4c03_8b33_759a49414044.slice. 
Sep 10 23:42:07.071851 kubelet[2651]: I0910 23:42:07.071802 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3722ae0c-e28d-4c03-8b33-759a49414044-cilium-config-path\") pod \"cilium-operator-5d85765b45-rg787\" (UID: \"3722ae0c-e28d-4c03-8b33-759a49414044\") " pod="kube-system/cilium-operator-5d85765b45-rg787" Sep 10 23:42:07.071851 kubelet[2651]: I0910 23:42:07.071857 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrs7\" (UniqueName: \"kubernetes.io/projected/3722ae0c-e28d-4c03-8b33-759a49414044-kube-api-access-jqrs7\") pod \"cilium-operator-5d85765b45-rg787\" (UID: \"3722ae0c-e28d-4c03-8b33-759a49414044\") " pod="kube-system/cilium-operator-5d85765b45-rg787" Sep 10 23:42:07.099607 containerd[1539]: time="2025-09-10T23:42:07.099537705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4sbd,Uid:6c2bf680-901d-4f69-9373-fd14da34e063,Namespace:kube-system,Attempt:0,}" Sep 10 23:42:07.105438 containerd[1539]: time="2025-09-10T23:42:07.105394799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vpgxt,Uid:42f6e2c1-7613-46a9-9de6-dd85a28c0449,Namespace:kube-system,Attempt:0,}" Sep 10 23:42:07.329623 containerd[1539]: time="2025-09-10T23:42:07.329369259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rg787,Uid:3722ae0c-e28d-4c03-8b33-759a49414044,Namespace:kube-system,Attempt:0,}" Sep 10 23:42:07.620270 containerd[1539]: time="2025-09-10T23:42:07.620094113Z" level=info msg="connecting to shim 4013c533fcd97ec3e26eac2a3f23822befdd5b28ff6321f297bb6d447c562119" address="unix:///run/containerd/s/479edddaa0011f14f60595e49ce06a211cb565f1417b7642e9059c995843e7fc" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:42:07.630161 containerd[1539]: time="2025-09-10T23:42:07.630117378Z" level=info msg="connecting to shim 
7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12" address="unix:///run/containerd/s/94e23d9f5aa9859584dc890ba94f55c7bf3104ce4b364dffa73fccc39dabc960" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:42:07.648810 systemd[1]: Started cri-containerd-4013c533fcd97ec3e26eac2a3f23822befdd5b28ff6321f297bb6d447c562119.scope - libcontainer container 4013c533fcd97ec3e26eac2a3f23822befdd5b28ff6321f297bb6d447c562119. Sep 10 23:42:07.653344 systemd[1]: Started cri-containerd-7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12.scope - libcontainer container 7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12. Sep 10 23:42:07.654596 containerd[1539]: time="2025-09-10T23:42:07.654524360Z" level=info msg="connecting to shim 5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59" address="unix:///run/containerd/s/f5862983afaf79224dcd9ca8f46f45cca4f123142d158c3a9159324117fabcf0" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:42:07.687740 containerd[1539]: time="2025-09-10T23:42:07.687695391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4sbd,Uid:6c2bf680-901d-4f69-9373-fd14da34e063,Namespace:kube-system,Attempt:0,} returns sandbox id \"4013c533fcd97ec3e26eac2a3f23822befdd5b28ff6321f297bb6d447c562119\"" Sep 10 23:42:07.693746 systemd[1]: Started cri-containerd-5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59.scope - libcontainer container 5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59. 
Sep 10 23:42:07.698469 containerd[1539]: time="2025-09-10T23:42:07.698433898Z" level=info msg="CreateContainer within sandbox \"4013c533fcd97ec3e26eac2a3f23822befdd5b28ff6321f297bb6d447c562119\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:42:07.702869 containerd[1539]: time="2025-09-10T23:42:07.702818804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vpgxt,Uid:42f6e2c1-7613-46a9-9de6-dd85a28c0449,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\"" Sep 10 23:42:07.704927 containerd[1539]: time="2025-09-10T23:42:07.704894925Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 23:42:07.731118 containerd[1539]: time="2025-09-10T23:42:07.731066156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rg787,Uid:3722ae0c-e28d-4c03-8b33-759a49414044,Namespace:kube-system,Attempt:0,} returns sandbox id \"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\"" Sep 10 23:42:07.733724 containerd[1539]: time="2025-09-10T23:42:07.733682050Z" level=info msg="Container 10e98774b2989ca5cba1b0777c656d298483ad8de77b49a7ec7a5e304ac6e099: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:07.767303 containerd[1539]: time="2025-09-10T23:42:07.767248885Z" level=info msg="CreateContainer within sandbox \"4013c533fcd97ec3e26eac2a3f23822befdd5b28ff6321f297bb6d447c562119\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"10e98774b2989ca5cba1b0777c656d298483ad8de77b49a7ec7a5e304ac6e099\"" Sep 10 23:42:07.770389 containerd[1539]: time="2025-09-10T23:42:07.770347998Z" level=info msg="StartContainer for \"10e98774b2989ca5cba1b0777c656d298483ad8de77b49a7ec7a5e304ac6e099\"" Sep 10 23:42:07.776276 containerd[1539]: time="2025-09-10T23:42:07.776226426Z" level=info msg="connecting to shim 
10e98774b2989ca5cba1b0777c656d298483ad8de77b49a7ec7a5e304ac6e099" address="unix:///run/containerd/s/479edddaa0011f14f60595e49ce06a211cb565f1417b7642e9059c995843e7fc" protocol=ttrpc version=3 Sep 10 23:42:07.800998 systemd[1]: Started cri-containerd-10e98774b2989ca5cba1b0777c656d298483ad8de77b49a7ec7a5e304ac6e099.scope - libcontainer container 10e98774b2989ca5cba1b0777c656d298483ad8de77b49a7ec7a5e304ac6e099. Sep 10 23:42:07.841200 containerd[1539]: time="2025-09-10T23:42:07.841161018Z" level=info msg="StartContainer for \"10e98774b2989ca5cba1b0777c656d298483ad8de77b49a7ec7a5e304ac6e099\" returns successfully" Sep 10 23:42:08.319629 kubelet[2651]: I0910 23:42:08.319528 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t4sbd" podStartSLOduration=2.319509797 podStartE2EDuration="2.319509797s" podCreationTimestamp="2025-09-10 23:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:42:08.247317761 +0000 UTC m=+8.209159883" watchObservedRunningTime="2025-09-10 23:42:08.319509797 +0000 UTC m=+8.281351879" Sep 10 23:42:14.911752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551378196.mount: Deactivated successfully. Sep 10 23:42:15.520008 update_engine[1527]: I20250910 23:42:15.519949 1527 update_attempter.cc:509] Updating boot flags... 
Sep 10 23:42:16.281055 containerd[1539]: time="2025-09-10T23:42:16.281005136Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:42:16.281895 containerd[1539]: time="2025-09-10T23:42:16.281498325Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 10 23:42:16.282514 containerd[1539]: time="2025-09-10T23:42:16.282461614Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:42:16.284048 containerd[1539]: time="2025-09-10T23:42:16.284019491Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.579091108s" Sep 10 23:42:16.284095 containerd[1539]: time="2025-09-10T23:42:16.284054784Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 10 23:42:16.287889 containerd[1539]: time="2025-09-10T23:42:16.287848958Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 23:42:16.289003 containerd[1539]: time="2025-09-10T23:42:16.288952621Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 23:42:16.323071 containerd[1539]: time="2025-09-10T23:42:16.323027519Z" level=info msg="Container 4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:16.328100 containerd[1539]: time="2025-09-10T23:42:16.328057286Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\"" Sep 10 23:42:16.328723 containerd[1539]: time="2025-09-10T23:42:16.328535469Z" level=info msg="StartContainer for \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\"" Sep 10 23:42:16.329377 containerd[1539]: time="2025-09-10T23:42:16.329351062Z" level=info msg="connecting to shim 4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37" address="unix:///run/containerd/s/94e23d9f5aa9859584dc890ba94f55c7bf3104ce4b364dffa73fccc39dabc960" protocol=ttrpc version=3 Sep 10 23:42:16.379781 systemd[1]: Started cri-containerd-4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37.scope - libcontainer container 4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37. Sep 10 23:42:16.407841 containerd[1539]: time="2025-09-10T23:42:16.407806405Z" level=info msg="StartContainer for \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" returns successfully" Sep 10 23:42:16.419602 systemd[1]: cri-containerd-4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37.scope: Deactivated successfully. 
Sep 10 23:42:16.445250 containerd[1539]: time="2025-09-10T23:42:16.445189650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" id:\"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" pid:3084 exited_at:{seconds:1757547736 nanos:430703539}" Sep 10 23:42:16.446187 containerd[1539]: time="2025-09-10T23:42:16.446148618Z" level=info msg="received exit event container_id:\"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" id:\"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" pid:3084 exited_at:{seconds:1757547736 nanos:430703539}" Sep 10 23:42:17.213063 containerd[1539]: time="2025-09-10T23:42:17.213018846Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 23:42:17.220744 containerd[1539]: time="2025-09-10T23:42:17.220704088Z" level=info msg="Container 7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:17.230309 containerd[1539]: time="2025-09-10T23:42:17.230247087Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\"" Sep 10 23:42:17.234744 containerd[1539]: time="2025-09-10T23:42:17.234711155Z" level=info msg="StartContainer for \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\"" Sep 10 23:42:17.235856 containerd[1539]: time="2025-09-10T23:42:17.235831804Z" level=info msg="connecting to shim 7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244" address="unix:///run/containerd/s/94e23d9f5aa9859584dc890ba94f55c7bf3104ce4b364dffa73fccc39dabc960" protocol=ttrpc version=3 Sep 10 
23:42:17.259751 systemd[1]: Started cri-containerd-7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244.scope - libcontainer container 7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244. Sep 10 23:42:17.299859 containerd[1539]: time="2025-09-10T23:42:17.299801929Z" level=info msg="StartContainer for \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" returns successfully" Sep 10 23:42:17.312094 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:42:17.312488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:42:17.313039 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:42:17.315212 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:42:17.315424 systemd[1]: cri-containerd-7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244.scope: Deactivated successfully. Sep 10 23:42:17.315693 systemd[1]: cri-containerd-7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244.scope: Consumed 33ms CPU time, 5.1M memory peak, 2M read from disk, 4K written to disk. Sep 10 23:42:17.322080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37-rootfs.mount: Deactivated successfully. Sep 10 23:42:17.322168 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 10 23:42:17.327602 containerd[1539]: time="2025-09-10T23:42:17.327542764Z" level=info msg="received exit event container_id:\"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" id:\"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" pid:3132 exited_at:{seconds:1757547737 nanos:327262862}" Sep 10 23:42:17.327680 containerd[1539]: time="2025-09-10T23:42:17.327642160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" id:\"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" pid:3132 exited_at:{seconds:1757547737 nanos:327262862}" Sep 10 23:42:17.347359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:42:17.351909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244-rootfs.mount: Deactivated successfully. Sep 10 23:42:17.534610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278843601.mount: Deactivated successfully. 
Sep 10 23:42:18.229306 containerd[1539]: time="2025-09-10T23:42:18.222634880Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 23:42:18.240097 containerd[1539]: time="2025-09-10T23:42:18.240037882Z" level=info msg="Container 96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:18.247008 containerd[1539]: time="2025-09-10T23:42:18.246960166Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\"" Sep 10 23:42:18.247540 containerd[1539]: time="2025-09-10T23:42:18.247493071Z" level=info msg="StartContainer for \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\"" Sep 10 23:42:18.249147 containerd[1539]: time="2025-09-10T23:42:18.249083463Z" level=info msg="connecting to shim 96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5" address="unix:///run/containerd/s/94e23d9f5aa9859584dc890ba94f55c7bf3104ce4b364dffa73fccc39dabc960" protocol=ttrpc version=3 Sep 10 23:42:18.273774 systemd[1]: Started cri-containerd-96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5.scope - libcontainer container 96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5. Sep 10 23:42:18.307892 containerd[1539]: time="2025-09-10T23:42:18.307852148Z" level=info msg="StartContainer for \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" returns successfully" Sep 10 23:42:18.309831 systemd[1]: cri-containerd-96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5.scope: Deactivated successfully. 
Sep 10 23:42:18.313263 containerd[1539]: time="2025-09-10T23:42:18.313166594Z" level=info msg="received exit event container_id:\"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" id:\"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" pid:3187 exited_at:{seconds:1757547738 nanos:312736604}" Sep 10 23:42:18.313368 containerd[1539]: time="2025-09-10T23:42:18.313179518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" id:\"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" pid:3187 exited_at:{seconds:1757547738 nanos:312736604}" Sep 10 23:42:18.336272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5-rootfs.mount: Deactivated successfully. Sep 10 23:42:18.713642 containerd[1539]: time="2025-09-10T23:42:18.713587025Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:42:18.714001 containerd[1539]: time="2025-09-10T23:42:18.713968518Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 10 23:42:18.714900 containerd[1539]: time="2025-09-10T23:42:18.714870471Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:42:18.719801 containerd[1539]: time="2025-09-10T23:42:18.719658133Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.431756875s"
Sep 10 23:42:18.719801 containerd[1539]: time="2025-09-10T23:42:18.719705990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 10 23:42:18.722969 containerd[1539]: time="2025-09-10T23:42:18.722533692Z" level=info msg="CreateContainer within sandbox \"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 23:42:18.729637 containerd[1539]: time="2025-09-10T23:42:18.728447025Z" level=info msg="Container 291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:18.735303 containerd[1539]: time="2025-09-10T23:42:18.735164917Z" level=info msg="CreateContainer within sandbox \"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\"" Sep 10 23:42:18.735666 containerd[1539]: time="2025-09-10T23:42:18.735627798Z" level=info msg="StartContainer for \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\"" Sep 10 23:42:18.736629 containerd[1539]: time="2025-09-10T23:42:18.736603217Z" level=info msg="connecting to shim 291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541" address="unix:///run/containerd/s/f5862983afaf79224dcd9ca8f46f45cca4f123142d158c3a9159324117fabcf0" protocol=ttrpc version=3 Sep 10 23:42:18.762759 systemd[1]: Started cri-containerd-291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541.scope - libcontainer container 291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541.
Sep 10 23:42:18.793963 containerd[1539]: time="2025-09-10T23:42:18.793917157Z" level=info msg="StartContainer for \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" returns successfully" Sep 10 23:42:19.235602 containerd[1539]: time="2025-09-10T23:42:19.235548902Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 23:42:19.261179 kubelet[2651]: I0910 23:42:19.261121 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-rg787" podStartSLOduration=2.273174499 podStartE2EDuration="13.261104478s" podCreationTimestamp="2025-09-10 23:42:06 +0000 UTC" firstStartedPulling="2025-09-10 23:42:07.73257901 +0000 UTC m=+7.694421092" lastFinishedPulling="2025-09-10 23:42:18.720508989 +0000 UTC m=+18.682351071" observedRunningTime="2025-09-10 23:42:19.237519394 +0000 UTC m=+19.199361476" watchObservedRunningTime="2025-09-10 23:42:19.261104478 +0000 UTC m=+19.222946560" Sep 10 23:42:19.261855 containerd[1539]: time="2025-09-10T23:42:19.261387652Z" level=info msg="Container d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:19.267920 containerd[1539]: time="2025-09-10T23:42:19.267867036Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\"" Sep 10 23:42:19.268632 containerd[1539]: time="2025-09-10T23:42:19.268598558Z" level=info msg="StartContainer for \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\"" Sep 10 23:42:19.269474 containerd[1539]: time="2025-09-10T23:42:19.269446078Z" level=info msg="connecting to shim d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0" address="unix:///run/containerd/s/94e23d9f5aa9859584dc890ba94f55c7bf3104ce4b364dffa73fccc39dabc960" protocol=ttrpc version=3
Sep 10 23:42:19.304769 systemd[1]: Started cri-containerd-d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0.scope - libcontainer container d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0. Sep 10 23:42:19.351031 systemd[1]: cri-containerd-d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0.scope: Deactivated successfully. Sep 10 23:42:19.351613 containerd[1539]: time="2025-09-10T23:42:19.351445771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" id:\"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" pid:3268 exited_at:{seconds:1757547739 nanos:351180364}" Sep 10 23:42:19.352587 containerd[1539]: time="2025-09-10T23:42:19.352211665Z" level=info msg="received exit event container_id:\"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" id:\"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" pid:3268 exited_at:{seconds:1757547739 nanos:351180364}" Sep 10 23:42:19.361022 containerd[1539]: time="2025-09-10T23:42:19.360983647Z" level=info msg="StartContainer for \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" returns successfully" Sep 10 23:42:19.372543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0-rootfs.mount: Deactivated successfully.
Sep 10 23:42:20.242720 containerd[1539]: time="2025-09-10T23:42:20.242668321Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 23:42:20.265323 containerd[1539]: time="2025-09-10T23:42:20.265264732Z" level=info msg="Container 269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:20.274698 containerd[1539]: time="2025-09-10T23:42:20.274659137Z" level=info msg="CreateContainer within sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\"" Sep 10 23:42:20.276122 containerd[1539]: time="2025-09-10T23:42:20.275686621Z" level=info msg="StartContainer for \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\"" Sep 10 23:42:20.277733 containerd[1539]: time="2025-09-10T23:42:20.277683211Z" level=info msg="connecting to shim 269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e" address="unix:///run/containerd/s/94e23d9f5aa9859584dc890ba94f55c7bf3104ce4b364dffa73fccc39dabc960" protocol=ttrpc version=3 Sep 10 23:42:20.301735 systemd[1]: Started cri-containerd-269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e.scope - libcontainer container 269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e. 
Sep 10 23:42:20.340168 containerd[1539]: time="2025-09-10T23:42:20.340131119Z" level=info msg="StartContainer for \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" returns successfully" Sep 10 23:42:20.433825 containerd[1539]: time="2025-09-10T23:42:20.433780994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" id:\"87abcb224f8092db8de17de9d9952394bac637d09d5ed5f026247b5e22b0d38d\" pid:3338 exited_at:{seconds:1757547740 nanos:433316927}" Sep 10 23:42:20.477248 kubelet[2651]: I0910 23:42:20.477164 2651 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 23:42:20.530295 systemd[1]: Created slice kubepods-burstable-pod5a4c4a52_1428_437e_8829_e9c74ea77d76.slice - libcontainer container kubepods-burstable-pod5a4c4a52_1428_437e_8829_e9c74ea77d76.slice. Sep 10 23:42:20.535276 systemd[1]: Created slice kubepods-burstable-pod810e873c_98bc_4a69_bd71_eea488d6f8bd.slice - libcontainer container kubepods-burstable-pod810e873c_98bc_4a69_bd71_eea488d6f8bd.slice. 
Sep 10 23:42:20.573731 kubelet[2651]: I0910 23:42:20.573588 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/810e873c-98bc-4a69-bd71-eea488d6f8bd-config-volume\") pod \"coredns-7c65d6cfc9-vcsj8\" (UID: \"810e873c-98bc-4a69-bd71-eea488d6f8bd\") " pod="kube-system/coredns-7c65d6cfc9-vcsj8" Sep 10 23:42:20.573731 kubelet[2651]: I0910 23:42:20.573636 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pnf7\" (UniqueName: \"kubernetes.io/projected/810e873c-98bc-4a69-bd71-eea488d6f8bd-kube-api-access-7pnf7\") pod \"coredns-7c65d6cfc9-vcsj8\" (UID: \"810e873c-98bc-4a69-bd71-eea488d6f8bd\") " pod="kube-system/coredns-7c65d6cfc9-vcsj8" Sep 10 23:42:20.573731 kubelet[2651]: I0910 23:42:20.573660 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4c4a52-1428-437e-8829-e9c74ea77d76-config-volume\") pod \"coredns-7c65d6cfc9-cnd2m\" (UID: \"5a4c4a52-1428-437e-8829-e9c74ea77d76\") " pod="kube-system/coredns-7c65d6cfc9-cnd2m" Sep 10 23:42:20.573731 kubelet[2651]: I0910 23:42:20.573678 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ltk9\" (UniqueName: \"kubernetes.io/projected/5a4c4a52-1428-437e-8829-e9c74ea77d76-kube-api-access-8ltk9\") pod \"coredns-7c65d6cfc9-cnd2m\" (UID: \"5a4c4a52-1428-437e-8829-e9c74ea77d76\") " pod="kube-system/coredns-7c65d6cfc9-cnd2m" Sep 10 23:42:20.834806 containerd[1539]: time="2025-09-10T23:42:20.834671791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cnd2m,Uid:5a4c4a52-1428-437e-8829-e9c74ea77d76,Namespace:kube-system,Attempt:0,}"
Sep 10 23:42:20.839910 containerd[1539]: time="2025-09-10T23:42:20.839871111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vcsj8,Uid:810e873c-98bc-4a69-bd71-eea488d6f8bd,Namespace:kube-system,Attempt:0,}" Sep 10 23:42:21.266854 kubelet[2651]: I0910 23:42:21.266550 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vpgxt" podStartSLOduration=6.683082606 podStartE2EDuration="15.266521582s" podCreationTimestamp="2025-09-10 23:42:06 +0000 UTC" firstStartedPulling="2025-09-10 23:42:07.704225552 +0000 UTC m=+7.666067594" lastFinishedPulling="2025-09-10 23:42:16.287664488 +0000 UTC m=+16.249506570" observedRunningTime="2025-09-10 23:42:21.266395144 +0000 UTC m=+21.228237226" watchObservedRunningTime="2025-09-10 23:42:21.266521582 +0000 UTC m=+21.228363624" Sep 10 23:42:22.389808 systemd-networkd[1449]: cilium_host: Link UP Sep 10 23:42:22.390273 systemd-networkd[1449]: cilium_net: Link UP Sep 10 23:42:22.390484 systemd-networkd[1449]: cilium_net: Gained carrier Sep 10 23:42:22.390630 systemd-networkd[1449]: cilium_host: Gained carrier Sep 10 23:42:22.470254 systemd-networkd[1449]: cilium_vxlan: Link UP Sep 10 23:42:22.470404 systemd-networkd[1449]: cilium_vxlan: Gained carrier Sep 10 23:42:22.740592 kernel: NET: Registered PF_ALG protocol family Sep 10 23:42:23.310745 systemd-networkd[1449]: cilium_host: Gained IPv6LL Sep 10 23:42:23.364492 systemd-networkd[1449]: lxc_health: Link UP Sep 10 23:42:23.365942 systemd-networkd[1449]: lxc_health: Gained carrier Sep 10 23:42:23.374831 systemd-networkd[1449]: cilium_net: Gained IPv6LL Sep 10 23:42:23.931612 kernel: eth0: renamed from tmp3ec21 Sep 10 23:42:23.934367 systemd-networkd[1449]: lxc31c8c4a543c6: Link UP Sep 10 23:42:23.947721 kernel: eth0: renamed from tmp9284b Sep 10 23:42:23.947765 systemd-networkd[1449]: tmp9284b: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:42:23.947835 systemd-networkd[1449]: tmp9284b: Cannot enable IPv6, ignoring: No such file or directory Sep 10 23:42:23.947847 systemd-networkd[1449]: tmp9284b: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Sep 10 23:42:23.947855 systemd-networkd[1449]: tmp9284b: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Sep 10 23:42:23.947864 systemd-networkd[1449]: tmp9284b: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Sep 10 23:42:23.947877 systemd-networkd[1449]: tmp9284b: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Sep 10 23:42:23.948945 systemd-networkd[1449]: lxc31c8c4a543c6: Gained carrier Sep 10 23:42:23.950039 systemd-networkd[1449]: lxcdd7d50f15b5c: Link UP Sep 10 23:42:23.950689 systemd-networkd[1449]: lxcdd7d50f15b5c: Gained carrier Sep 10 23:42:24.462763 systemd-networkd[1449]: cilium_vxlan: Gained IPv6LL Sep 10 23:42:25.358854 systemd-networkd[1449]: lxc31c8c4a543c6: Gained IPv6LL Sep 10 23:42:25.359575 systemd-networkd[1449]: lxc_health: Gained IPv6LL Sep 10 23:42:25.550807 systemd-networkd[1449]: lxcdd7d50f15b5c: Gained IPv6LL Sep 10 23:42:27.638599 containerd[1539]: time="2025-09-10T23:42:27.638262208Z" level=info msg="connecting to shim 3ec21d89821c6a96c21856a96370bc784261d0967480ac55271013de050693e1" address="unix:///run/containerd/s/93935bf7743f483147e1599c332eeb369418c61fa54a404c5bc4052f5619419d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:42:27.666088 containerd[1539]: time="2025-09-10T23:42:27.666041417Z" level=info msg="connecting to shim 9284b5e266b75c4910058885257d4f2781401205a60b58517017a6ba4157865e" address="unix:///run/containerd/s/33d130952d6662cb2aa1b1278ae5c9f16bd8c16e6e91bb2ab5a2e137d8a5a8ec" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:42:27.668807 systemd[1]: Started cri-containerd-3ec21d89821c6a96c21856a96370bc784261d0967480ac55271013de050693e1.scope - libcontainer container 3ec21d89821c6a96c21856a96370bc784261d0967480ac55271013de050693e1.
Sep 10 23:42:27.691837 systemd[1]: Started cri-containerd-9284b5e266b75c4910058885257d4f2781401205a60b58517017a6ba4157865e.scope - libcontainer container 9284b5e266b75c4910058885257d4f2781401205a60b58517017a6ba4157865e. Sep 10 23:42:27.701716 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:42:27.708838 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:42:27.812576 containerd[1539]: time="2025-09-10T23:42:27.812515980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cnd2m,Uid:5a4c4a52-1428-437e-8829-e9c74ea77d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"9284b5e266b75c4910058885257d4f2781401205a60b58517017a6ba4157865e\"" Sep 10 23:42:27.823300 containerd[1539]: time="2025-09-10T23:42:27.823239190Z" level=info msg="CreateContainer within sandbox \"9284b5e266b75c4910058885257d4f2781401205a60b58517017a6ba4157865e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:42:27.895374 containerd[1539]: time="2025-09-10T23:42:27.894977924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vcsj8,Uid:810e873c-98bc-4a69-bd71-eea488d6f8bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ec21d89821c6a96c21856a96370bc784261d0967480ac55271013de050693e1\"" Sep 10 23:42:27.904485 containerd[1539]: time="2025-09-10T23:42:27.904448282Z" level=info msg="CreateContainer within sandbox \"3ec21d89821c6a96c21856a96370bc784261d0967480ac55271013de050693e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:42:27.956934 containerd[1539]: time="2025-09-10T23:42:27.956754585Z" level=info msg="Container 4c7ee6a2fe0aef8a0ff60e8d205ab63f9aa3ab9d5269e30f8b0bbe4dc5af0704: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:42:27.958410 containerd[1539]: time="2025-09-10T23:42:27.958367959Z" level=info msg="Container f7cfe296822cdcabd7af17ee019644e1904de8e139f2d72a66ef017e17367aa4: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:42:27.985427 containerd[1539]: time="2025-09-10T23:42:27.985210671Z" level=info msg="CreateContainer within sandbox \"3ec21d89821c6a96c21856a96370bc784261d0967480ac55271013de050693e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7cfe296822cdcabd7af17ee019644e1904de8e139f2d72a66ef017e17367aa4\"" Sep 10 23:42:27.986146 containerd[1539]: time="2025-09-10T23:42:27.986083713Z" level=info msg="StartContainer for \"f7cfe296822cdcabd7af17ee019644e1904de8e139f2d72a66ef017e17367aa4\"" Sep 10 23:42:27.987136 containerd[1539]: time="2025-09-10T23:42:27.987056419Z" level=info msg="connecting to shim f7cfe296822cdcabd7af17ee019644e1904de8e139f2d72a66ef017e17367aa4" address="unix:///run/containerd/s/93935bf7743f483147e1599c332eeb369418c61fa54a404c5bc4052f5619419d" protocol=ttrpc version=3 Sep 10 23:42:27.995823 containerd[1539]: time="2025-09-10T23:42:27.995422601Z" level=info msg="CreateContainer within sandbox \"9284b5e266b75c4910058885257d4f2781401205a60b58517017a6ba4157865e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c7ee6a2fe0aef8a0ff60e8d205ab63f9aa3ab9d5269e30f8b0bbe4dc5af0704\"" Sep 10 23:42:28.000461 containerd[1539]: time="2025-09-10T23:42:27.997793192Z" level=info msg="StartContainer for \"4c7ee6a2fe0aef8a0ff60e8d205ab63f9aa3ab9d5269e30f8b0bbe4dc5af0704\"" Sep 10 23:42:28.000914 containerd[1539]: time="2025-09-10T23:42:28.000859463Z" level=info msg="connecting to shim 4c7ee6a2fe0aef8a0ff60e8d205ab63f9aa3ab9d5269e30f8b0bbe4dc5af0704" address="unix:///run/containerd/s/33d130952d6662cb2aa1b1278ae5c9f16bd8c16e6e91bb2ab5a2e137d8a5a8ec" protocol=ttrpc version=3 Sep 10 23:42:28.013810 systemd[1]: Started cri-containerd-f7cfe296822cdcabd7af17ee019644e1904de8e139f2d72a66ef017e17367aa4.scope - libcontainer container f7cfe296822cdcabd7af17ee019644e1904de8e139f2d72a66ef017e17367aa4.
Sep 10 23:42:28.030759 systemd[1]: Started cri-containerd-4c7ee6a2fe0aef8a0ff60e8d205ab63f9aa3ab9d5269e30f8b0bbe4dc5af0704.scope - libcontainer container 4c7ee6a2fe0aef8a0ff60e8d205ab63f9aa3ab9d5269e30f8b0bbe4dc5af0704. Sep 10 23:42:28.077223 containerd[1539]: time="2025-09-10T23:42:28.077177970Z" level=info msg="StartContainer for \"f7cfe296822cdcabd7af17ee019644e1904de8e139f2d72a66ef017e17367aa4\" returns successfully" Sep 10 23:42:28.077453 containerd[1539]: time="2025-09-10T23:42:28.077429866Z" level=info msg="StartContainer for \"4c7ee6a2fe0aef8a0ff60e8d205ab63f9aa3ab9d5269e30f8b0bbe4dc5af0704\" returns successfully" Sep 10 23:42:28.283090 kubelet[2651]: I0910 23:42:28.283026 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vcsj8" podStartSLOduration=22.283008433 podStartE2EDuration="22.283008433s" podCreationTimestamp="2025-09-10 23:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:42:28.282979586 +0000 UTC m=+28.244821628" watchObservedRunningTime="2025-09-10 23:42:28.283008433 +0000 UTC m=+28.244850475" Sep 10 23:42:28.298177 kubelet[2651]: I0910 23:42:28.297971 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cnd2m" podStartSLOduration=22.297955246 podStartE2EDuration="22.297955246s" podCreationTimestamp="2025-09-10 23:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:42:28.297944484 +0000 UTC m=+28.259786566" watchObservedRunningTime="2025-09-10 23:42:28.297955246 +0000 UTC m=+28.259797328" Sep 10 23:42:28.621316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675405148.mount: Deactivated successfully.
Sep 10 23:42:29.286296 kubelet[2651]: I0910 23:42:29.286239 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:42:30.437499 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:48072.service - OpenSSH per-connection server daemon (10.0.0.1:48072). Sep 10 23:42:30.509629 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 48072 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:42:30.511662 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:42:30.517487 systemd-logind[1522]: New session 8 of user core. Sep 10 23:42:30.538839 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 23:42:30.692440 sshd[3998]: Connection closed by 10.0.0.1 port 48072 Sep 10 23:42:30.693225 sshd-session[3996]: pam_unix(sshd:session): session closed for user core Sep 10 23:42:30.699718 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:48072.service: Deactivated successfully. Sep 10 23:42:30.702348 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:42:30.703961 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:42:30.706505 systemd-logind[1522]: Removed session 8. Sep 10 23:42:35.713407 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:48088.service - OpenSSH per-connection server daemon (10.0.0.1:48088). Sep 10 23:42:35.787977 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 48088 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:42:35.791468 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:42:35.798270 systemd-logind[1522]: New session 9 of user core. Sep 10 23:42:35.808800 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 10 23:42:35.951425 sshd[4025]: Connection closed by 10.0.0.1 port 48088 Sep 10 23:42:35.952086 sshd-session[4023]: pam_unix(sshd:session): session closed for user core Sep 10 23:42:35.955884 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:48088.service: Deactivated successfully. Sep 10 23:42:35.957436 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 23:42:35.962543 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Sep 10 23:42:35.964282 systemd-logind[1522]: Removed session 9. Sep 10 23:42:40.971874 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:37906.service - OpenSSH per-connection server daemon (10.0.0.1:37906). Sep 10 23:42:41.034447 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 37906 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:42:41.035976 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:42:41.042349 systemd-logind[1522]: New session 10 of user core. Sep 10 23:42:41.051806 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 23:42:41.186397 sshd[4046]: Connection closed by 10.0.0.1 port 37906 Sep 10 23:42:41.187144 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Sep 10 23:42:41.191391 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Sep 10 23:42:41.191485 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:37906.service: Deactivated successfully. Sep 10 23:42:41.193284 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 23:42:41.196231 systemd-logind[1522]: Removed session 10. Sep 10 23:42:46.205206 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:37908.service - OpenSSH per-connection server daemon (10.0.0.1:37908). 
Sep 10 23:42:46.286939 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 37908 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:42:46.288398 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:42:46.295687 systemd-logind[1522]: New session 11 of user core. Sep 10 23:42:46.302796 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 23:42:46.434978 sshd[4064]: Connection closed by 10.0.0.1 port 37908 Sep 10 23:42:46.435643 sshd-session[4062]: pam_unix(sshd:session): session closed for user core Sep 10 23:42:46.449648 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:37908.service: Deactivated successfully. Sep 10 23:42:46.454710 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 23:42:46.458508 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Sep 10 23:42:46.462505 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:37918.service - OpenSSH per-connection server daemon (10.0.0.1:37918). Sep 10 23:42:46.463518 systemd-logind[1522]: Removed session 11. Sep 10 23:42:46.523908 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 37918 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:42:46.525911 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:42:46.531294 systemd-logind[1522]: New session 12 of user core. Sep 10 23:42:46.542820 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 23:42:46.707700 sshd[4081]: Connection closed by 10.0.0.1 port 37918 Sep 10 23:42:46.708587 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Sep 10 23:42:46.723687 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:37918.service: Deactivated successfully. Sep 10 23:42:46.725533 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 23:42:46.728925 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. 
Sep 10 23:42:46.733840 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:37926.service - OpenSSH per-connection server daemon (10.0.0.1:37926). Sep 10 23:42:46.735221 systemd-logind[1522]: Removed session 12. Sep 10 23:42:46.798177 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 37926 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:42:46.799838 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:42:46.804860 systemd-logind[1522]: New session 13 of user core. Sep 10 23:42:46.814774 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 23:42:46.931361 sshd[4095]: Connection closed by 10.0.0.1 port 37926 Sep 10 23:42:46.931810 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Sep 10 23:42:46.935617 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:37926.service: Deactivated successfully. Sep 10 23:42:46.939214 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 23:42:46.940240 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. Sep 10 23:42:46.941842 systemd-logind[1522]: Removed session 13. Sep 10 23:42:51.948379 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:35356.service - OpenSSH per-connection server daemon (10.0.0.1:35356). Sep 10 23:42:52.019631 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 35356 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:42:52.021077 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:42:52.024940 systemd-logind[1522]: New session 14 of user core. Sep 10 23:42:52.035774 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 10 23:42:52.150982 sshd[4112]: Connection closed by 10.0.0.1 port 35356
Sep 10 23:42:52.152134 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Sep 10 23:42:52.159167 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:35356.service: Deactivated successfully.
Sep 10 23:42:52.162331 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:42:52.163198 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:42:52.164696 systemd-logind[1522]: Removed session 14.
Sep 10 23:42:57.164355 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:35370.service - OpenSSH per-connection server daemon (10.0.0.1:35370).
Sep 10 23:42:57.229717 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 35370 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:42:57.231133 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:42:57.237536 systemd-logind[1522]: New session 15 of user core.
Sep 10 23:42:57.247760 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:42:57.379636 sshd[4128]: Connection closed by 10.0.0.1 port 35370
Sep 10 23:42:57.380716 sshd-session[4126]: pam_unix(sshd:session): session closed for user core
Sep 10 23:42:57.392155 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:35370.service: Deactivated successfully.
Sep 10 23:42:57.393806 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:42:57.396007 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:42:57.399543 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:35382.service - OpenSSH per-connection server daemon (10.0.0.1:35382).
Sep 10 23:42:57.404086 systemd-logind[1522]: Removed session 15.
Sep 10 23:42:57.453508 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 35382 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:42:57.454861 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:42:57.460192 systemd-logind[1522]: New session 16 of user core.
Sep 10 23:42:57.473784 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:42:57.685164 sshd[4143]: Connection closed by 10.0.0.1 port 35382
Sep 10 23:42:57.686286 sshd-session[4141]: pam_unix(sshd:session): session closed for user core
Sep 10 23:42:57.696342 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:35382.service: Deactivated successfully.
Sep 10 23:42:57.698031 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:42:57.699481 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:42:57.704095 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:35384.service - OpenSSH per-connection server daemon (10.0.0.1:35384).
Sep 10 23:42:57.704936 systemd-logind[1522]: Removed session 16.
Sep 10 23:42:57.761844 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:42:57.763362 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:42:57.768778 systemd-logind[1522]: New session 17 of user core.
Sep 10 23:42:57.778781 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:42:59.066069 sshd[4158]: Connection closed by 10.0.0.1 port 35384
Sep 10 23:42:59.066653 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Sep 10 23:42:59.080096 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:35384.service: Deactivated successfully.
Sep 10 23:42:59.083109 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:42:59.085055 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:42:59.094854 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:35386.service - OpenSSH per-connection server daemon (10.0.0.1:35386).
Sep 10 23:42:59.096429 systemd-logind[1522]: Removed session 17.
Sep 10 23:42:59.153329 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 35386 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:42:59.154985 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:42:59.158924 systemd-logind[1522]: New session 18 of user core.
Sep 10 23:42:59.165744 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:42:59.386596 sshd[4179]: Connection closed by 10.0.0.1 port 35386
Sep 10 23:42:59.387302 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Sep 10 23:42:59.406177 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:35386.service: Deactivated successfully.
Sep 10 23:42:59.408656 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:42:59.409885 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:42:59.411962 systemd-logind[1522]: Removed session 18.
Sep 10 23:42:59.414025 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:35394.service - OpenSSH per-connection server daemon (10.0.0.1:35394).
Sep 10 23:42:59.472775 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 35394 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:42:59.474215 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:42:59.478392 systemd-logind[1522]: New session 19 of user core.
Sep 10 23:42:59.489772 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:42:59.601753 sshd[4193]: Connection closed by 10.0.0.1 port 35394
Sep 10 23:42:59.602094 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Sep 10 23:42:59.605914 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:35394.service: Deactivated successfully.
Sep 10 23:42:59.608783 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:42:59.610889 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:42:59.612711 systemd-logind[1522]: Removed session 19.
Sep 10 23:43:04.615617 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:46004.service - OpenSSH per-connection server daemon (10.0.0.1:46004).
Sep 10 23:43:04.687873 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 46004 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:43:04.689446 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:43:04.696435 systemd-logind[1522]: New session 20 of user core.
Sep 10 23:43:04.702798 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 23:43:04.850364 sshd[4213]: Connection closed by 10.0.0.1 port 46004
Sep 10 23:43:04.850910 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Sep 10 23:43:04.857123 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:46004.service: Deactivated successfully.
Sep 10 23:43:04.859667 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 23:43:04.860928 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit.
Sep 10 23:43:04.862485 systemd-logind[1522]: Removed session 20.
Sep 10 23:43:09.866070 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:46010.service - OpenSSH per-connection server daemon (10.0.0.1:46010).
Sep 10 23:43:09.929576 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:43:09.930966 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:43:09.935782 systemd-logind[1522]: New session 21 of user core.
Sep 10 23:43:09.945747 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 23:43:10.058644 sshd[4230]: Connection closed by 10.0.0.1 port 46010
Sep 10 23:43:10.058965 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Sep 10 23:43:10.062479 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:46010.service: Deactivated successfully.
Sep 10 23:43:10.064011 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 23:43:10.065330 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit.
Sep 10 23:43:10.066532 systemd-logind[1522]: Removed session 21.
Sep 10 23:43:15.075322 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:40288.service - OpenSSH per-connection server daemon (10.0.0.1:40288).
Sep 10 23:43:15.125732 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 40288 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:43:15.127238 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:43:15.131443 systemd-logind[1522]: New session 22 of user core.
Sep 10 23:43:15.142810 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 23:43:15.259951 sshd[4246]: Connection closed by 10.0.0.1 port 40288
Sep 10 23:43:15.260473 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Sep 10 23:43:15.274275 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:40288.service: Deactivated successfully.
Sep 10 23:43:15.276861 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 23:43:15.281185 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit.
Sep 10 23:43:15.283781 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:40298.service - OpenSSH per-connection server daemon (10.0.0.1:40298).
Sep 10 23:43:15.285084 systemd-logind[1522]: Removed session 22.
Sep 10 23:43:15.339476 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 40298 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:43:15.340907 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:43:15.345642 systemd-logind[1522]: New session 23 of user core.
Sep 10 23:43:15.359830 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 23:43:17.357655 containerd[1539]: time="2025-09-10T23:43:17.357529869Z" level=info msg="StopContainer for \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" with timeout 30 (s)"
Sep 10 23:43:17.360847 containerd[1539]: time="2025-09-10T23:43:17.359689924Z" level=info msg="Stop container \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" with signal terminated"
Sep 10 23:43:17.372848 systemd[1]: cri-containerd-291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541.scope: Deactivated successfully.
Sep 10 23:43:17.377488 containerd[1539]: time="2025-09-10T23:43:17.377443580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" id:\"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" pid:3235 exited_at:{seconds:1757547797 nanos:376064367}"
Sep 10 23:43:17.377624 containerd[1539]: time="2025-09-10T23:43:17.377524456Z" level=info msg="received exit event container_id:\"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" id:\"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" pid:3235 exited_at:{seconds:1757547797 nanos:376064367}"
Sep 10 23:43:17.387853 containerd[1539]: time="2025-09-10T23:43:17.387800075Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 23:43:17.392769 containerd[1539]: time="2025-09-10T23:43:17.392714076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" id:\"55bbf1426ffe8b1101cd881e16ae5dd7e13b2782c0e1fdd9ab3ebcdb18eee60f\" pid:4289 exited_at:{seconds:1757547797 nanos:392173263}"
Sep 10 23:43:17.395949 containerd[1539]: time="2025-09-10T23:43:17.395897561Z" level=info msg="StopContainer for \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" with timeout 2 (s)"
Sep 10 23:43:17.396431 containerd[1539]: time="2025-09-10T23:43:17.396405616Z" level=info msg="Stop container \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" with signal terminated"
Sep 10 23:43:17.403832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541-rootfs.mount: Deactivated successfully.
Sep 10 23:43:17.409617 systemd-networkd[1449]: lxc_health: Link DOWN
Sep 10 23:43:17.409647 systemd-networkd[1449]: lxc_health: Lost carrier
Sep 10 23:43:17.424324 containerd[1539]: time="2025-09-10T23:43:17.424032271Z" level=info msg="StopContainer for \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" returns successfully"
Sep 10 23:43:17.429254 containerd[1539]: time="2025-09-10T23:43:17.428934713Z" level=info msg="received exit event container_id:\"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" id:\"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" pid:3305 exited_at:{seconds:1757547797 nanos:427976119}"
Sep 10 23:43:17.429425 systemd[1]: cri-containerd-269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e.scope: Deactivated successfully.
Sep 10 23:43:17.429802 containerd[1539]: time="2025-09-10T23:43:17.429773432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" id:\"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" pid:3305 exited_at:{seconds:1757547797 nanos:427976119}"
Sep 10 23:43:17.429819 systemd[1]: cri-containerd-269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e.scope: Consumed 6.426s CPU time, 121.6M memory peak, 132K read from disk, 14.2M written to disk.
Sep 10 23:43:17.432613 containerd[1539]: time="2025-09-10T23:43:17.432380305Z" level=info msg="StopPodSandbox for \"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\""
Sep 10 23:43:17.432613 containerd[1539]: time="2025-09-10T23:43:17.432478260Z" level=info msg="Container to stop \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:43:17.443041 systemd[1]: cri-containerd-5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59.scope: Deactivated successfully.
Sep 10 23:43:17.443990 containerd[1539]: time="2025-09-10T23:43:17.443946782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\" id:\"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\" pid:2854 exit_status:137 exited_at:{seconds:1757547797 nanos:443639757}"
Sep 10 23:43:17.462877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e-rootfs.mount: Deactivated successfully.
Sep 10 23:43:17.473460 containerd[1539]: time="2025-09-10T23:43:17.473337511Z" level=info msg="StopContainer for \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" returns successfully"
Sep 10 23:43:17.474061 containerd[1539]: time="2025-09-10T23:43:17.474023917Z" level=info msg="StopPodSandbox for \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\""
Sep 10 23:43:17.474220 containerd[1539]: time="2025-09-10T23:43:17.474203429Z" level=info msg="Container to stop \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:43:17.474289 containerd[1539]: time="2025-09-10T23:43:17.474275785Z" level=info msg="Container to stop \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:43:17.474340 containerd[1539]: time="2025-09-10T23:43:17.474328383Z" level=info msg="Container to stop \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:43:17.474394 containerd[1539]: time="2025-09-10T23:43:17.474382620Z" level=info msg="Container to stop \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:43:17.474448 containerd[1539]: time="2025-09-10T23:43:17.474437417Z" level=info msg="Container to stop \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:43:17.478885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59-rootfs.mount: Deactivated successfully.
Sep 10 23:43:17.483460 systemd[1]: cri-containerd-7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12.scope: Deactivated successfully.
Sep 10 23:43:17.492375 containerd[1539]: time="2025-09-10T23:43:17.492278349Z" level=info msg="shim disconnected" id=5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59 namespace=k8s.io
Sep 10 23:43:17.493372 containerd[1539]: time="2025-09-10T23:43:17.492321467Z" level=warning msg="cleaning up after shim disconnected" id=5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59 namespace=k8s.io
Sep 10 23:43:17.493372 containerd[1539]: time="2025-09-10T23:43:17.493202144Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:43:17.509651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12-rootfs.mount: Deactivated successfully.
Sep 10 23:43:17.515590 containerd[1539]: time="2025-09-10T23:43:17.515359545Z" level=info msg="shim disconnected" id=7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12 namespace=k8s.io
Sep 10 23:43:17.515590 containerd[1539]: time="2025-09-10T23:43:17.515521137Z" level=warning msg="cleaning up after shim disconnected" id=7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12 namespace=k8s.io
Sep 10 23:43:17.515750 containerd[1539]: time="2025-09-10T23:43:17.515598293Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:43:17.525885 containerd[1539]: time="2025-09-10T23:43:17.525831275Z" level=info msg="received exit event sandbox_id:\"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\" exit_status:137 exited_at:{seconds:1757547797 nanos:443639757}"
Sep 10 23:43:17.526701 containerd[1539]: time="2025-09-10T23:43:17.526648515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" id:\"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" pid:2814 exit_status:137 exited_at:{seconds:1757547797 nanos:483660808}"
Sep 10 23:43:17.526836 containerd[1539]: time="2025-09-10T23:43:17.526817147Z" level=info msg="received exit event sandbox_id:\"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" exit_status:137 exited_at:{seconds:1757547797 nanos:483660808}"
Sep 10 23:43:17.527715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59-shm.mount: Deactivated successfully.
Sep 10 23:43:17.528571 containerd[1539]: time="2025-09-10T23:43:17.527364320Z" level=info msg="TearDown network for sandbox \"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\" successfully"
Sep 10 23:43:17.528571 containerd[1539]: time="2025-09-10T23:43:17.528009849Z" level=info msg="StopPodSandbox for \"5de3a3a1cdf36ec3f5eac2d0188dd4a1b0da64ba3c298d72565e54a4bce4fe59\" returns successfully"
Sep 10 23:43:17.528571 containerd[1539]: time="2025-09-10T23:43:17.527481715Z" level=info msg="TearDown network for sandbox \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" successfully"
Sep 10 23:43:17.528571 containerd[1539]: time="2025-09-10T23:43:17.528190000Z" level=info msg="StopPodSandbox for \"7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12\" returns successfully"
Sep 10 23:43:17.566961 kubelet[2651]: I0910 23:43:17.566919 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3722ae0c-e28d-4c03-8b33-759a49414044-cilium-config-path\") pod \"3722ae0c-e28d-4c03-8b33-759a49414044\" (UID: \"3722ae0c-e28d-4c03-8b33-759a49414044\") "
Sep 10 23:43:17.567598 kubelet[2651]: I0910 23:43:17.567430 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqrs7\" (UniqueName: \"kubernetes.io/projected/3722ae0c-e28d-4c03-8b33-759a49414044-kube-api-access-jqrs7\") pod \"3722ae0c-e28d-4c03-8b33-759a49414044\" (UID: \"3722ae0c-e28d-4c03-8b33-759a49414044\") "
Sep 10 23:43:17.583090 kubelet[2651]: I0910 23:43:17.582838 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3722ae0c-e28d-4c03-8b33-759a49414044-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3722ae0c-e28d-4c03-8b33-759a49414044" (UID: "3722ae0c-e28d-4c03-8b33-759a49414044"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 10 23:43:17.586367 kubelet[2651]: I0910 23:43:17.585723 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3722ae0c-e28d-4c03-8b33-759a49414044-kube-api-access-jqrs7" (OuterVolumeSpecName: "kube-api-access-jqrs7") pod "3722ae0c-e28d-4c03-8b33-759a49414044" (UID: "3722ae0c-e28d-4c03-8b33-759a49414044"). InnerVolumeSpecName "kube-api-access-jqrs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 23:43:17.668813 kubelet[2651]: I0910 23:43:17.668687 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-xtables-lock\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.668813 kubelet[2651]: I0910 23:43:17.668729 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-lib-modules\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.668813 kubelet[2651]: I0910 23:43:17.668754 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmmnw\" (UniqueName: \"kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-kube-api-access-gmmnw\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.668813 kubelet[2651]: I0910 23:43:17.668771 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-net\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.668813 kubelet[2651]: I0910 23:43:17.668792 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cni-path\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.668813 kubelet[2651]: I0910 23:43:17.668807 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-bpf-maps\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669046 kubelet[2651]: I0910 23:43:17.668826 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hubble-tls\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669046 kubelet[2651]: I0910 23:43:17.668841 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-kernel\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669046 kubelet[2651]: I0910 23:43:17.668820 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.669046 kubelet[2651]: I0910 23:43:17.668880 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.669046 kubelet[2651]: I0910 23:43:17.668857 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-run\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669215 kubelet[2651]: I0910 23:43:17.668904 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.669215 kubelet[2651]: I0910 23:43:17.668919 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cni-path" (OuterVolumeSpecName: "cni-path") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.669215 kubelet[2651]: I0910 23:43:17.668928 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-config-path\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669215 kubelet[2651]: I0910 23:43:17.668932 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.669215 kubelet[2651]: I0910 23:43:17.668950 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hostproc\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669319 kubelet[2651]: I0910 23:43:17.668971 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-etc-cni-netd\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669319 kubelet[2651]: I0910 23:43:17.668990 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42f6e2c1-7613-46a9-9de6-dd85a28c0449-clustermesh-secrets\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669319 kubelet[2651]: I0910 23:43:17.669005 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-cgroup\") pod \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\" (UID: \"42f6e2c1-7613-46a9-9de6-dd85a28c0449\") "
Sep 10 23:43:17.669319 kubelet[2651]: I0910 23:43:17.669047 2651 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.669319 kubelet[2651]: I0910 23:43:17.669058 2651 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.669319 kubelet[2651]: I0910 23:43:17.669066 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.669319 kubelet[2651]: I0910 23:43:17.669077 2651 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqrs7\" (UniqueName: \"kubernetes.io/projected/3722ae0c-e28d-4c03-8b33-759a49414044-kube-api-access-jqrs7\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.669469 kubelet[2651]: I0910 23:43:17.669086 2651 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.669469 kubelet[2651]: I0910 23:43:17.669094 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3722ae0c-e28d-4c03-8b33-759a49414044-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.669469 kubelet[2651]: I0910 23:43:17.669101 2651 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.669469 kubelet[2651]: I0910 23:43:17.669123 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.671027 kubelet[2651]: I0910 23:43:17.670932 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 10 23:43:17.671027 kubelet[2651]: I0910 23:43:17.671000 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hostproc" (OuterVolumeSpecName: "hostproc") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.671027 kubelet[2651]: I0910 23:43:17.671023 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.671319 kubelet[2651]: I0910 23:43:17.671260 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.671358 kubelet[2651]: I0910 23:43:17.671329 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 23:43:17.671454 kubelet[2651]: I0910 23:43:17.671415 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 23:43:17.671861 kubelet[2651]: I0910 23:43:17.671804 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-kube-api-access-gmmnw" (OuterVolumeSpecName: "kube-api-access-gmmnw") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "kube-api-access-gmmnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 23:43:17.676764 kubelet[2651]: I0910 23:43:17.676691 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f6e2c1-7613-46a9-9de6-dd85a28c0449-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "42f6e2c1-7613-46a9-9de6-dd85a28c0449" (UID: "42f6e2c1-7613-46a9-9de6-dd85a28c0449"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 10 23:43:17.769832 kubelet[2651]: I0910 23:43:17.769774 2651 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.769832 kubelet[2651]: I0910 23:43:17.769811 2651 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.769832 kubelet[2651]: I0910 23:43:17.769823 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.769832 kubelet[2651]: I0910 23:43:17.769835 2651 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.769832 kubelet[2651]: I0910 23:43:17.769843 2651 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 10 23:43:17.770081 kubelet[2651]: I0910 23:43:17.769852 2651 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\"
(UniqueName: \"kubernetes.io/secret/42f6e2c1-7613-46a9-9de6-dd85a28c0449-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 23:43:17.770081 kubelet[2651]: I0910 23:43:17.769861 2651 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 23:43:17.770081 kubelet[2651]: I0910 23:43:17.769869 2651 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42f6e2c1-7613-46a9-9de6-dd85a28c0449-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 23:43:17.770081 kubelet[2651]: I0910 23:43:17.769877 2651 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmmnw\" (UniqueName: \"kubernetes.io/projected/42f6e2c1-7613-46a9-9de6-dd85a28c0449-kube-api-access-gmmnw\") on node \"localhost\" DevicePath \"\"" Sep 10 23:43:18.146310 systemd[1]: Removed slice kubepods-besteffort-pod3722ae0c_e28d_4c03_8b33_759a49414044.slice - libcontainer container kubepods-besteffort-pod3722ae0c_e28d_4c03_8b33_759a49414044.slice. Sep 10 23:43:18.147577 systemd[1]: Removed slice kubepods-burstable-pod42f6e2c1_7613_46a9_9de6_dd85a28c0449.slice - libcontainer container kubepods-burstable-pod42f6e2c1_7613_46a9_9de6_dd85a28c0449.slice. Sep 10 23:43:18.147689 systemd[1]: kubepods-burstable-pod42f6e2c1_7613_46a9_9de6_dd85a28c0449.slice: Consumed 6.531s CPU time, 121.9M memory peak, 2.9M read from disk, 14.3M written to disk. 
Sep 10 23:43:18.375401 kubelet[2651]: I0910 23:43:18.375359 2651 scope.go:117] "RemoveContainer" containerID="291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541" Sep 10 23:43:18.377656 containerd[1539]: time="2025-09-10T23:43:18.377614302Z" level=info msg="RemoveContainer for \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\"" Sep 10 23:43:18.403713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f84565d01200f74e309417d989797cc5629d90bbeb30b1be0a69e5dd4cc1d12-shm.mount: Deactivated successfully. Sep 10 23:43:18.403827 systemd[1]: var-lib-kubelet-pods-3722ae0c\x2de28d\x2d4c03\x2d8b33\x2d759a49414044-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqrs7.mount: Deactivated successfully. Sep 10 23:43:18.403879 systemd[1]: var-lib-kubelet-pods-42f6e2c1\x2d7613\x2d46a9\x2d9de6\x2ddd85a28c0449-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmmnw.mount: Deactivated successfully. Sep 10 23:43:18.403928 systemd[1]: var-lib-kubelet-pods-42f6e2c1\x2d7613\x2d46a9\x2d9de6\x2ddd85a28c0449-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 23:43:18.403986 systemd[1]: var-lib-kubelet-pods-42f6e2c1\x2d7613\x2d46a9\x2d9de6\x2ddd85a28c0449-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 10 23:43:18.418324 containerd[1539]: time="2025-09-10T23:43:18.418267189Z" level=info msg="RemoveContainer for \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" returns successfully" Sep 10 23:43:18.421757 kubelet[2651]: I0910 23:43:18.421727 2651 scope.go:117] "RemoveContainer" containerID="291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541" Sep 10 23:43:18.423118 containerd[1539]: time="2025-09-10T23:43:18.423042733Z" level=error msg="ContainerStatus for \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\": not found" Sep 10 23:43:18.424598 kubelet[2651]: E0910 23:43:18.424476 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\": not found" containerID="291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541" Sep 10 23:43:18.424957 kubelet[2651]: I0910 23:43:18.424819 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541"} err="failed to get container status \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\": rpc error: code = NotFound desc = an error occurred when try to find container \"291237a4906e54bd0b573094a50bf8dad5bc7c9b89888e59624e9f5d67c7d541\": not found" Sep 10 23:43:18.425033 kubelet[2651]: I0910 23:43:18.425021 2651 scope.go:117] "RemoveContainer" containerID="269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e" Sep 10 23:43:18.427885 containerd[1539]: time="2025-09-10T23:43:18.427848317Z" level=info msg="RemoveContainer for \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\"" Sep 10 23:43:18.443109 
containerd[1539]: time="2025-09-10T23:43:18.443061351Z" level=info msg="RemoveContainer for \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" returns successfully" Sep 10 23:43:18.443464 kubelet[2651]: I0910 23:43:18.443434 2651 scope.go:117] "RemoveContainer" containerID="d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0" Sep 10 23:43:18.445020 containerd[1539]: time="2025-09-10T23:43:18.444986664Z" level=info msg="RemoveContainer for \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\"" Sep 10 23:43:18.449961 containerd[1539]: time="2025-09-10T23:43:18.449909242Z" level=info msg="RemoveContainer for \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" returns successfully" Sep 10 23:43:18.450247 kubelet[2651]: I0910 23:43:18.450217 2651 scope.go:117] "RemoveContainer" containerID="96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5" Sep 10 23:43:18.454944 containerd[1539]: time="2025-09-10T23:43:18.454883978Z" level=info msg="RemoveContainer for \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\"" Sep 10 23:43:18.481316 containerd[1539]: time="2025-09-10T23:43:18.481264668Z" level=info msg="RemoveContainer for \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" returns successfully" Sep 10 23:43:18.481571 kubelet[2651]: I0910 23:43:18.481523 2651 scope.go:117] "RemoveContainer" containerID="7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244" Sep 10 23:43:18.483242 containerd[1539]: time="2025-09-10T23:43:18.483190021Z" level=info msg="RemoveContainer for \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\"" Sep 10 23:43:18.486782 containerd[1539]: time="2025-09-10T23:43:18.486731381Z" level=info msg="RemoveContainer for \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" returns successfully" Sep 10 23:43:18.487003 kubelet[2651]: I0910 23:43:18.486977 2651 scope.go:117] "RemoveContainer" 
containerID="4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37" Sep 10 23:43:18.489733 containerd[1539]: time="2025-09-10T23:43:18.489689248Z" level=info msg="RemoveContainer for \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\"" Sep 10 23:43:18.492896 containerd[1539]: time="2025-09-10T23:43:18.492858705Z" level=info msg="RemoveContainer for \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" returns successfully" Sep 10 23:43:18.493147 kubelet[2651]: I0910 23:43:18.493120 2651 scope.go:117] "RemoveContainer" containerID="269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e" Sep 10 23:43:18.494973 containerd[1539]: time="2025-09-10T23:43:18.493421440Z" level=error msg="ContainerStatus for \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\": not found" Sep 10 23:43:18.495167 kubelet[2651]: E0910 23:43:18.495059 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\": not found" containerID="269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e" Sep 10 23:43:18.495167 kubelet[2651]: I0910 23:43:18.495094 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e"} err="failed to get container status \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\": rpc error: code = NotFound desc = an error occurred when try to find container \"269989e070a5005de3c674df215fdfbed88c5883356d88236e000ce6ed14d37e\": not found" Sep 10 23:43:18.495167 kubelet[2651]: I0910 23:43:18.495128 2651 scope.go:117] "RemoveContainer" 
containerID="d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0" Sep 10 23:43:18.495424 containerd[1539]: time="2025-09-10T23:43:18.495386071Z" level=error msg="ContainerStatus for \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\": not found" Sep 10 23:43:18.495604 kubelet[2651]: E0910 23:43:18.495551 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\": not found" containerID="d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0" Sep 10 23:43:18.495955 kubelet[2651]: I0910 23:43:18.495609 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0"} err="failed to get container status \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3a4dcd0025a07f6312acf3ef7298ee9735610c7cbaf9a4bf8a297c6266689b0\": not found" Sep 10 23:43:18.495955 kubelet[2651]: I0910 23:43:18.495636 2651 scope.go:117] "RemoveContainer" containerID="96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5" Sep 10 23:43:18.496027 containerd[1539]: time="2025-09-10T23:43:18.495862090Z" level=error msg="ContainerStatus for \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\": not found" Sep 10 23:43:18.496052 kubelet[2651]: E0910 23:43:18.495960 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\": not found" containerID="96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5" Sep 10 23:43:18.496052 kubelet[2651]: I0910 23:43:18.495977 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5"} err="failed to get container status \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"96e862733f952eeac784574e35fbff0212b0ca0a6860abcc71ede631215047b5\": not found" Sep 10 23:43:18.496052 kubelet[2651]: I0910 23:43:18.495990 2651 scope.go:117] "RemoveContainer" containerID="7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244" Sep 10 23:43:18.496161 containerd[1539]: time="2025-09-10T23:43:18.496125478Z" level=error msg="ContainerStatus for \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\": not found" Sep 10 23:43:18.496252 kubelet[2651]: E0910 23:43:18.496228 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\": not found" containerID="7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244" Sep 10 23:43:18.496299 kubelet[2651]: I0910 23:43:18.496249 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244"} err="failed to get container status \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"7ff0beb4a3b0fde7f08c8edef213f1d0d8895d0e7e3e147a5d083db01e21c244\": not found" Sep 10 23:43:18.496299 kubelet[2651]: I0910 23:43:18.496262 2651 scope.go:117] "RemoveContainer" containerID="4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37" Sep 10 23:43:18.496474 containerd[1539]: time="2025-09-10T23:43:18.496445743Z" level=error msg="ContainerStatus for \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\": not found" Sep 10 23:43:18.496599 kubelet[2651]: E0910 23:43:18.496580 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\": not found" containerID="4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37" Sep 10 23:43:18.496646 kubelet[2651]: I0910 23:43:18.496599 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37"} err="failed to get container status \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f00e0aa7420bf43312318ca904c56067284743715542c4a007051e036db0f37\": not found" Sep 10 23:43:19.294331 sshd[4262]: Connection closed by 10.0.0.1 port 40298 Sep 10 23:43:19.294811 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Sep 10 23:43:19.307053 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:40298.service: Deactivated successfully. Sep 10 23:43:19.309059 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 23:43:19.309281 systemd[1]: session-23.scope: Consumed 1.320s CPU time, 24.7M memory peak. 
Sep 10 23:43:19.309996 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Sep 10 23:43:19.312761 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:40308.service - OpenSSH per-connection server daemon (10.0.0.1:40308). Sep 10 23:43:19.313466 systemd-logind[1522]: Removed session 23. Sep 10 23:43:19.369373 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 40308 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:43:19.370769 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:43:19.375601 systemd-logind[1522]: New session 24 of user core. Sep 10 23:43:19.384796 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 10 23:43:20.141241 kubelet[2651]: I0910 23:43:20.141197 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3722ae0c-e28d-4c03-8b33-759a49414044" path="/var/lib/kubelet/pods/3722ae0c-e28d-4c03-8b33-759a49414044/volumes" Sep 10 23:43:20.141948 kubelet[2651]: I0910 23:43:20.141684 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f6e2c1-7613-46a9-9de6-dd85a28c0449" path="/var/lib/kubelet/pods/42f6e2c1-7613-46a9-9de6-dd85a28c0449/volumes" Sep 10 23:43:20.238130 kubelet[2651]: E0910 23:43:20.238064 2651 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 23:43:20.823572 sshd[4415]: Connection closed by 10.0.0.1 port 40308 Sep 10 23:43:20.824071 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Sep 10 23:43:20.835664 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:40308.service: Deactivated successfully. Sep 10 23:43:20.837324 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 23:43:20.837544 systemd[1]: session-24.scope: Consumed 1.350s CPU time, 23.8M memory peak. 
Sep 10 23:43:20.842697 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit. Sep 10 23:43:20.846193 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:52886.service - OpenSSH per-connection server daemon (10.0.0.1:52886). Sep 10 23:43:20.848457 systemd-logind[1522]: Removed session 24. Sep 10 23:43:20.859085 kubelet[2651]: E0910 23:43:20.857104 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42f6e2c1-7613-46a9-9de6-dd85a28c0449" containerName="apply-sysctl-overwrites" Sep 10 23:43:20.859201 kubelet[2651]: E0910 23:43:20.859095 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42f6e2c1-7613-46a9-9de6-dd85a28c0449" containerName="cilium-agent" Sep 10 23:43:20.859201 kubelet[2651]: E0910 23:43:20.859105 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42f6e2c1-7613-46a9-9de6-dd85a28c0449" containerName="mount-cgroup" Sep 10 23:43:20.859201 kubelet[2651]: E0910 23:43:20.859110 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42f6e2c1-7613-46a9-9de6-dd85a28c0449" containerName="mount-bpf-fs" Sep 10 23:43:20.859201 kubelet[2651]: E0910 23:43:20.859116 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3722ae0c-e28d-4c03-8b33-759a49414044" containerName="cilium-operator" Sep 10 23:43:20.859201 kubelet[2651]: E0910 23:43:20.859121 2651 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42f6e2c1-7613-46a9-9de6-dd85a28c0449" containerName="clean-cilium-state" Sep 10 23:43:20.859201 kubelet[2651]: I0910 23:43:20.859177 2651 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f6e2c1-7613-46a9-9de6-dd85a28c0449" containerName="cilium-agent" Sep 10 23:43:20.859201 kubelet[2651]: I0910 23:43:20.859188 2651 memory_manager.go:354] "RemoveStaleState removing state" podUID="3722ae0c-e28d-4c03-8b33-759a49414044" containerName="cilium-operator" Sep 10 23:43:20.877869 systemd[1]: Created slice 
kubepods-burstable-pod6ff1956d_3f8d_4a40_9626_8a08cd822c58.slice - libcontainer container kubepods-burstable-pod6ff1956d_3f8d_4a40_9626_8a08cd822c58.slice. Sep 10 23:43:20.917708 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 52886 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:43:20.919114 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:43:20.924653 systemd-logind[1522]: New session 25 of user core. Sep 10 23:43:20.931791 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 23:43:20.982289 sshd[4429]: Connection closed by 10.0.0.1 port 52886 Sep 10 23:43:20.982805 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Sep 10 23:43:20.989969 kubelet[2651]: I0910 23:43:20.989922 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-lib-modules\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.989969 kubelet[2651]: I0910 23:43:20.989969 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ff1956d-3f8d-4a40-9626-8a08cd822c58-hubble-tls\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990088 kubelet[2651]: I0910 23:43:20.989992 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-xtables-lock\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990088 kubelet[2651]: I0910 23:43:20.990012 2651 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p7bv\" (UniqueName: \"kubernetes.io/projected/6ff1956d-3f8d-4a40-9626-8a08cd822c58-kube-api-access-8p7bv\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990088 kubelet[2651]: I0910 23:43:20.990028 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-cni-path\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990088 kubelet[2651]: I0910 23:43:20.990043 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-cilium-run\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990088 kubelet[2651]: I0910 23:43:20.990060 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-bpf-maps\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990088 kubelet[2651]: I0910 23:43:20.990086 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-etc-cni-netd\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990224 kubelet[2651]: I0910 23:43:20.990102 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-hostproc\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990224 kubelet[2651]: I0910 23:43:20.990118 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-host-proc-sys-kernel\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990224 kubelet[2651]: I0910 23:43:20.990132 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-cilium-cgroup\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990224 kubelet[2651]: I0910 23:43:20.990150 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ff1956d-3f8d-4a40-9626-8a08cd822c58-cilium-config-path\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990224 kubelet[2651]: I0910 23:43:20.990165 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ff1956d-3f8d-4a40-9626-8a08cd822c58-host-proc-sys-net\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990320 kubelet[2651]: I0910 23:43:20.990185 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6ff1956d-3f8d-4a40-9626-8a08cd822c58-cilium-ipsec-secrets\") pod 
\"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.990320 kubelet[2651]: I0910 23:43:20.990201 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ff1956d-3f8d-4a40-9626-8a08cd822c58-clustermesh-secrets\") pod \"cilium-zlbh4\" (UID: \"6ff1956d-3f8d-4a40-9626-8a08cd822c58\") " pod="kube-system/cilium-zlbh4" Sep 10 23:43:20.994931 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:52886.service: Deactivated successfully. Sep 10 23:43:20.996643 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 23:43:20.997370 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit. Sep 10 23:43:20.999773 systemd[1]: Started sshd@25-10.0.0.21:22-10.0.0.1:52896.service - OpenSSH per-connection server daemon (10.0.0.1:52896). Sep 10 23:43:21.001790 systemd-logind[1522]: Removed session 25. Sep 10 23:43:21.062540 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 52896 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:43:21.064012 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:43:21.068597 systemd-logind[1522]: New session 26 of user core. Sep 10 23:43:21.078763 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 10 23:43:21.185179 containerd[1539]: time="2025-09-10T23:43:21.185130373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlbh4,Uid:6ff1956d-3f8d-4a40-9626-8a08cd822c58,Namespace:kube-system,Attempt:0,}"
Sep 10 23:43:21.207612 containerd[1539]: time="2025-09-10T23:43:21.207551029Z" level=info msg="connecting to shim f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7" address="unix:///run/containerd/s/3623dff0bc75bdc6da723a64399399ab4be0964acd4a50d8f8c9104fba01a03d" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:43:21.230796 systemd[1]: Started cri-containerd-f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7.scope - libcontainer container f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7.
Sep 10 23:43:21.255390 containerd[1539]: time="2025-09-10T23:43:21.255341837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlbh4,Uid:6ff1956d-3f8d-4a40-9626-8a08cd822c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\""
Sep 10 23:43:21.265285 containerd[1539]: time="2025-09-10T23:43:21.265243691Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 23:43:21.272839 containerd[1539]: time="2025-09-10T23:43:21.272781947Z" level=info msg="Container 32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:43:21.280276 containerd[1539]: time="2025-09-10T23:43:21.280230406Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740\""
Sep 10 23:43:21.280807 containerd[1539]: time="2025-09-10T23:43:21.280783107Z" level=info msg="StartContainer for \"32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740\""
Sep 10 23:43:21.281688 containerd[1539]: time="2025-09-10T23:43:21.281647197Z" level=info msg="connecting to shim 32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740" address="unix:///run/containerd/s/3623dff0bc75bdc6da723a64399399ab4be0964acd4a50d8f8c9104fba01a03d" protocol=ttrpc version=3
Sep 10 23:43:21.306785 systemd[1]: Started cri-containerd-32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740.scope - libcontainer container 32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740.
Sep 10 23:43:21.336917 containerd[1539]: time="2025-09-10T23:43:21.336801188Z" level=info msg="StartContainer for \"32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740\" returns successfully"
Sep 10 23:43:21.345771 systemd[1]: cri-containerd-32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740.scope: Deactivated successfully.
Sep 10 23:43:21.348235 containerd[1539]: time="2025-09-10T23:43:21.348180950Z" level=info msg="received exit event container_id:\"32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740\" id:\"32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740\" pid:4507 exited_at:{seconds:1757547801 nanos:347883440}"
Sep 10 23:43:21.348488 containerd[1539]: time="2025-09-10T23:43:21.348460420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740\" id:\"32f7c2d9988a1f6dab834ea313ed025b9b7ba33c3f17a074c73b6e374d9a4740\" pid:4507 exited_at:{seconds:1757547801 nanos:347883440}"
Sep 10 23:43:21.401420 containerd[1539]: time="2025-09-10T23:43:21.401348770Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 23:43:21.411375 containerd[1539]: time="2025-09-10T23:43:21.411228104Z" level=info msg="Container 1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:43:21.418612 containerd[1539]: time="2025-09-10T23:43:21.418528689Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8\""
Sep 10 23:43:21.419375 containerd[1539]: time="2025-09-10T23:43:21.419347420Z" level=info msg="StartContainer for \"1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8\""
Sep 10 23:43:21.420311 containerd[1539]: time="2025-09-10T23:43:21.420283148Z" level=info msg="connecting to shim 1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8" address="unix:///run/containerd/s/3623dff0bc75bdc6da723a64399399ab4be0964acd4a50d8f8c9104fba01a03d" protocol=ttrpc version=3
Sep 10 23:43:21.439774 systemd[1]: Started cri-containerd-1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8.scope - libcontainer container 1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8.
Sep 10 23:43:21.467414 containerd[1539]: time="2025-09-10T23:43:21.467357301Z" level=info msg="StartContainer for \"1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8\" returns successfully"
Sep 10 23:43:21.475223 systemd[1]: cri-containerd-1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8.scope: Deactivated successfully.
Sep 10 23:43:21.477781 containerd[1539]: time="2025-09-10T23:43:21.477732698Z" level=info msg="received exit event container_id:\"1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8\" id:\"1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8\" pid:4551 exited_at:{seconds:1757547801 nanos:477376831}"
Sep 10 23:43:21.477781 containerd[1539]: time="2025-09-10T23:43:21.477776737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8\" id:\"1e03e0a0784b774c57639c694da7727301f6ae31a81137333cac476d248397f8\" pid:4551 exited_at:{seconds:1757547801 nanos:477376831}"
Sep 10 23:43:21.819470 kubelet[2651]: I0910 23:43:21.819356 2651 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T23:43:21Z","lastTransitionTime":"2025-09-10T23:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 10 23:43:22.409851 containerd[1539]: time="2025-09-10T23:43:22.409803813Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 23:43:22.430721 containerd[1539]: time="2025-09-10T23:43:22.430675869Z" level=info msg="Container 568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:43:22.471590 containerd[1539]: time="2025-09-10T23:43:22.471051384Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47\""
Sep 10 23:43:22.472630 containerd[1539]: time="2025-09-10T23:43:22.472191948Z" level=info msg="StartContainer for \"568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47\""
Sep 10 23:43:22.475062 containerd[1539]: time="2025-09-10T23:43:22.474976779Z" level=info msg="connecting to shim 568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47" address="unix:///run/containerd/s/3623dff0bc75bdc6da723a64399399ab4be0964acd4a50d8f8c9104fba01a03d" protocol=ttrpc version=3
Sep 10 23:43:22.500058 systemd[1]: Started cri-containerd-568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47.scope - libcontainer container 568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47.
Sep 10 23:43:22.571911 systemd[1]: cri-containerd-568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47.scope: Deactivated successfully.
Sep 10 23:43:22.572862 containerd[1539]: time="2025-09-10T23:43:22.572823987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47\" id:\"568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47\" pid:4594 exited_at:{seconds:1757547802 nanos:572503517}"
Sep 10 23:43:22.572973 containerd[1539]: time="2025-09-10T23:43:22.572905224Z" level=info msg="received exit event container_id:\"568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47\" id:\"568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47\" pid:4594 exited_at:{seconds:1757547802 nanos:572503517}"
Sep 10 23:43:22.574648 containerd[1539]: time="2025-09-10T23:43:22.574470734Z" level=info msg="StartContainer for \"568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47\" returns successfully"
Sep 10 23:43:23.103746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-568b572199cc72da4e4ec1a56242df061bd10d0d4d525045adab7d2742ae7d47-rootfs.mount: Deactivated successfully.
Sep 10 23:43:23.412304 containerd[1539]: time="2025-09-10T23:43:23.412049709Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 23:43:23.419031 containerd[1539]: time="2025-09-10T23:43:23.418984030Z" level=info msg="Container ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:43:23.426623 containerd[1539]: time="2025-09-10T23:43:23.426580532Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143\""
Sep 10 23:43:23.427748 containerd[1539]: time="2025-09-10T23:43:23.427137475Z" level=info msg="StartContainer for \"ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143\""
Sep 10 23:43:23.428331 containerd[1539]: time="2025-09-10T23:43:23.428287722Z" level=info msg="connecting to shim ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143" address="unix:///run/containerd/s/3623dff0bc75bdc6da723a64399399ab4be0964acd4a50d8f8c9104fba01a03d" protocol=ttrpc version=3
Sep 10 23:43:23.450827 systemd[1]: Started cri-containerd-ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143.scope - libcontainer container ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143.
Sep 10 23:43:23.476156 systemd[1]: cri-containerd-ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143.scope: Deactivated successfully.
Sep 10 23:43:23.477366 containerd[1539]: time="2025-09-10T23:43:23.477308353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143\" id:\"ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143\" pid:4634 exited_at:{seconds:1757547803 nanos:476233544}"
Sep 10 23:43:23.478662 containerd[1539]: time="2025-09-10T23:43:23.477472348Z" level=info msg="received exit event container_id:\"ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143\" id:\"ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143\" pid:4634 exited_at:{seconds:1757547803 nanos:476233544}"
Sep 10 23:43:23.490099 containerd[1539]: time="2025-09-10T23:43:23.490062427Z" level=info msg="StartContainer for \"ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143\" returns successfully"
Sep 10 23:43:23.500840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac56ecf714e520d9846231c2c93542720191969d1a2c27769c380f399f0b3143-rootfs.mount: Deactivated successfully.
Sep 10 23:43:24.418277 containerd[1539]: time="2025-09-10T23:43:24.418236263Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 23:43:24.429405 containerd[1539]: time="2025-09-10T23:43:24.429329458Z" level=info msg="Container 838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:43:24.438089 containerd[1539]: time="2025-09-10T23:43:24.437913516Z" level=info msg="CreateContainer within sandbox \"f890a6e781861416c42bd3b75acf5652be56d0cdb428be425443e5e9ffcebdd7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\""
Sep 10 23:43:24.439719 containerd[1539]: time="2025-09-10T23:43:24.439548634Z" level=info msg="StartContainer for \"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\""
Sep 10 23:43:24.442774 containerd[1539]: time="2025-09-10T23:43:24.442375161Z" level=info msg="connecting to shim 838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84" address="unix:///run/containerd/s/3623dff0bc75bdc6da723a64399399ab4be0964acd4a50d8f8c9104fba01a03d" protocol=ttrpc version=3
Sep 10 23:43:24.468776 systemd[1]: Started cri-containerd-838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84.scope - libcontainer container 838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84.
Sep 10 23:43:24.505515 containerd[1539]: time="2025-09-10T23:43:24.505450415Z" level=info msg="StartContainer for \"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\" returns successfully"
Sep 10 23:43:24.564707 containerd[1539]: time="2025-09-10T23:43:24.564662929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\" id:\"81b0dc4976b03af4a29108f12d8cd3165452bea5101a35f319b61cda1ef0cf52\" pid:4702 exited_at:{seconds:1757547804 nanos:564338817}"
Sep 10 23:43:24.780596 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 10 23:43:25.489413 kubelet[2651]: I0910 23:43:25.488974 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zlbh4" podStartSLOduration=5.488956209 podStartE2EDuration="5.488956209s" podCreationTimestamp="2025-09-10 23:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:43:25.488653616 +0000 UTC m=+85.450495738" watchObservedRunningTime="2025-09-10 23:43:25.488956209 +0000 UTC m=+85.450798291"
Sep 10 23:43:27.436995 containerd[1539]: time="2025-09-10T23:43:27.436954577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\" id:\"bfb09cfa7a06b224f8a8c52bacf4015b70d29c9e5cee44d8561d26e0b78993a1\" pid:5114 exit_status:1 exited_at:{seconds:1757547807 nanos:436609543}"
Sep 10 23:43:27.741214 systemd-networkd[1449]: lxc_health: Link UP
Sep 10 23:43:27.741444 systemd-networkd[1449]: lxc_health: Gained carrier
Sep 10 23:43:29.358727 systemd-networkd[1449]: lxc_health: Gained IPv6LL
Sep 10 23:43:29.557801 containerd[1539]: time="2025-09-10T23:43:29.557755885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\" id:\"31db32029d7c02c24cf8ef43d09786fa436b5a0c6cacfa891180c5c45ccdd685\" pid:5246 exited_at:{seconds:1757547809 nanos:557324770}"
Sep 10 23:43:31.674684 containerd[1539]: time="2025-09-10T23:43:31.674640427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\" id:\"e190f0524c5ef055403575d62860a44c1c71226a76df477bfe2e8f44127c433d\" pid:5279 exited_at:{seconds:1757547811 nanos:674186431}"
Sep 10 23:43:33.797587 containerd[1539]: time="2025-09-10T23:43:33.797520756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"838ae984f14e37a224124893495a1a99ad3704bfb5187548bb7e129426f93c84\" id:\"f3b4e5ee863ab242b500cefcb11a9da9ffab312e5b970fcf09ed70e78dbbef9e\" pid:5304 exited_at:{seconds:1757547813 nanos:797074637}"
Sep 10 23:43:33.801374 kubelet[2651]: E0910 23:43:33.801336 2651 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35814->127.0.0.1:35841: write tcp 127.0.0.1:35814->127.0.0.1:35841: write: broken pipe
Sep 10 23:43:33.818096 sshd[4438]: Connection closed by 10.0.0.1 port 52896
Sep 10 23:43:33.818510 sshd-session[4436]: pam_unix(sshd:session): session closed for user core
Sep 10 23:43:33.822031 systemd[1]: sshd@25-10.0.0.21:22-10.0.0.1:52896.service: Deactivated successfully.
Sep 10 23:43:33.826038 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 23:43:33.828027 systemd-logind[1522]: Session 26 logged out. Waiting for processes to exit.
Sep 10 23:43:33.829980 systemd-logind[1522]: Removed session 26.