Sep 4 23:58:06.770374 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 23:58:06.770394 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Sep 4 22:21:34 -00 2025
Sep 4 23:58:06.770404 kernel: KASLR enabled
Sep 4 23:58:06.770410 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:58:06.770415 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 4 23:58:06.770420 kernel: random: crng init done
Sep 4 23:58:06.770427 kernel: secureboot: Secure boot disabled
Sep 4 23:58:06.770432 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:58:06.770438 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 4 23:58:06.770445 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 23:58:06.770451 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770457 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770462 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770468 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770475 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770482 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770488 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770494 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770500 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:58:06.770506 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 4 23:58:06.770512 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 4 23:58:06.770518 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 23:58:06.770524 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 4 23:58:06.770530 kernel: Zone ranges:
Sep 4 23:58:06.770536 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 23:58:06.770543 kernel: DMA32 empty
Sep 4 23:58:06.770549 kernel: Normal empty
Sep 4 23:58:06.770554 kernel: Device empty
Sep 4 23:58:06.770560 kernel: Movable zone start for each node
Sep 4 23:58:06.770566 kernel: Early memory node ranges
Sep 4 23:58:06.770572 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 4 23:58:06.770578 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 4 23:58:06.770584 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 4 23:58:06.770589 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 4 23:58:06.770595 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 4 23:58:06.770601 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 4 23:58:06.770607 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 4 23:58:06.770614 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 4 23:58:06.770620 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 4 23:58:06.770626 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 4 23:58:06.770634 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 4 23:58:06.770641 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 4 23:58:06.770647 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 4 23:58:06.770655 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 23:58:06.770671 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 4 23:58:06.770678 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 4 23:58:06.770684 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:58:06.770690 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 23:58:06.770696 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:58:06.770703 kernel: psci: Trusted OS migration not required
Sep 4 23:58:06.770709 kernel: psci: SMC Calling Convention v1.1
Sep 4 23:58:06.770715 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 4 23:58:06.770721 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 4 23:58:06.770729 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 4 23:58:06.770736 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 4 23:58:06.770742 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:58:06.770748 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:58:06.770755 kernel: CPU features: detected: Spectre-v4
Sep 4 23:58:06.770761 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:58:06.770767 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 23:58:06.770773 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 23:58:06.770780 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 23:58:06.770786 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 23:58:06.770792 kernel: alternatives: applying boot alternatives
Sep 4 23:58:06.770799 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=936dbc4ea592050e15794e1e6e7f70cd7cba0dbef72270410b4bbc6a29324de7
Sep 4 23:58:06.770807 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:58:06.770813 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:58:06.770820 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:58:06.770826 kernel: Fallback order for Node 0: 0
Sep 4 23:58:06.770832 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 4 23:58:06.770838 kernel: Policy zone: DMA
Sep 4 23:58:06.770844 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:58:06.770851 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 4 23:58:06.770857 kernel: software IO TLB: area num 4.
Sep 4 23:58:06.770863 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 4 23:58:06.770870 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 4 23:58:06.770877 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 23:58:06.770884 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:58:06.770891 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:58:06.770897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 23:58:06.770904 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:58:06.770910 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:58:06.770916 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:58:06.770923 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 23:58:06.770929 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:58:06.770935 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:58:06.770942 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:58:06.770949 kernel: GICv3: 256 SPIs implemented
Sep 4 23:58:06.770956 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:58:06.770962 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:58:06.770968 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 23:58:06.770974 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 4 23:58:06.770981 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 4 23:58:06.770987 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 4 23:58:06.770994 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 23:58:06.771000 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 4 23:58:06.771007 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 4 23:58:06.771013 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 4 23:58:06.771020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:58:06.771028 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:58:06.771034 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 23:58:06.771041 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 23:58:06.771048 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 23:58:06.771054 kernel: arm-pv: using stolen time PV
Sep 4 23:58:06.771061 kernel: Console: colour dummy device 80x25
Sep 4 23:58:06.771067 kernel: ACPI: Core revision 20240827
Sep 4 23:58:06.771074 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 23:58:06.771081 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:58:06.771088 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 23:58:06.771095 kernel: landlock: Up and running.
Sep 4 23:58:06.771102 kernel: SELinux: Initializing.
Sep 4 23:58:06.771108 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:58:06.771115 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:58:06.771121 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:58:06.771128 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:58:06.771144 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 23:58:06.771151 kernel: Remapping and enabling EFI services.
Sep 4 23:58:06.771157 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:58:06.771170 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:58:06.771177 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 4 23:58:06.771184 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 4 23:58:06.771192 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:58:06.771198 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 23:58:06.771205 kernel: Detected PIPT I-cache on CPU2
Sep 4 23:58:06.771212 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 4 23:58:06.771219 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 4 23:58:06.771228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:58:06.771234 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 4 23:58:06.771241 kernel: Detected PIPT I-cache on CPU3
Sep 4 23:58:06.771248 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 4 23:58:06.771255 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 4 23:58:06.771261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:58:06.771268 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 4 23:58:06.771275 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 23:58:06.771281 kernel: SMP: Total of 4 processors activated.
Sep 4 23:58:06.771289 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:58:06.771296 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:58:06.771303 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 23:58:06.771310 kernel: CPU features: detected: Common not Private translations
Sep 4 23:58:06.771316 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:58:06.771323 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 4 23:58:06.771330 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 23:58:06.771337 kernel: CPU features: detected: LSE atomic instructions
Sep 4 23:58:06.771343 kernel: CPU features: detected: Privileged Access Never
Sep 4 23:58:06.771350 kernel: CPU features: detected: RAS Extension Support
Sep 4 23:58:06.771358 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 4 23:58:06.771365 kernel: alternatives: applying system-wide alternatives
Sep 4 23:58:06.771372 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 4 23:58:06.771379 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 4 23:58:06.771386 kernel: devtmpfs: initialized
Sep 4 23:58:06.771393 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:58:06.771400 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 23:58:06.771406 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 23:58:06.771414 kernel: 0 pages in range for non-PLT usage
Sep 4 23:58:06.771421 kernel: 508560 pages in range for PLT usage
Sep 4 23:58:06.771428 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:58:06.771435 kernel: SMBIOS 3.0.0 present.
Sep 4 23:58:06.771441 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 4 23:58:06.771448 kernel: DMI: Memory slots populated: 1/1
Sep 4 23:58:06.771455 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:58:06.771462 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:58:06.771469 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:58:06.771477 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:58:06.771484 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:58:06.771490 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 4 23:58:06.771497 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:58:06.771504 kernel: cpuidle: using governor menu
Sep 4 23:58:06.771511 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:58:06.771518 kernel: ASID allocator initialised with 32768 entries
Sep 4 23:58:06.771524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:58:06.771531 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:58:06.771539 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:58:06.771546 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:58:06.771553 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:58:06.771560 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:58:06.771567 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:58:06.771573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:58:06.771580 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:58:06.771587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:58:06.771594 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:58:06.771602 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:58:06.771609 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:58:06.771616 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:58:06.771622 kernel: ACPI: Interpreter enabled
Sep 4 23:58:06.771629 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:58:06.771636 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 23:58:06.771643 kernel: ACPI: CPU0 has been hot-added
Sep 4 23:58:06.771649 kernel: ACPI: CPU1 has been hot-added
Sep 4 23:58:06.771660 kernel: ACPI: CPU2 has been hot-added
Sep 4 23:58:06.771668 kernel: ACPI: CPU3 has been hot-added
Sep 4 23:58:06.771676 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 23:58:06.771683 kernel: printk: legacy console [ttyAMA0] enabled
Sep 4 23:58:06.771690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:58:06.771824 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:58:06.771887 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 23:58:06.771945 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 23:58:06.772002 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 4 23:58:06.772060 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 4 23:58:06.772069 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 4 23:58:06.772076 kernel: PCI host bridge to bus 0000:00
Sep 4 23:58:06.772170 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 4 23:58:06.772230 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 23:58:06.772282 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 4 23:58:06.772333 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:58:06.772411 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 4 23:58:06.772479 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 23:58:06.772539 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 4 23:58:06.772598 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 4 23:58:06.772664 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 23:58:06.772729 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 4 23:58:06.772790 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 4 23:58:06.772858 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 4 23:58:06.772915 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 4 23:58:06.772993 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 23:58:06.773046 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 4 23:58:06.773055 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 23:58:06.773062 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 23:58:06.773069 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 23:58:06.773078 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 23:58:06.773086 kernel: iommu: Default domain type: Translated
Sep 4 23:58:06.773093 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:58:06.773100 kernel: efivars: Registered efivars operations
Sep 4 23:58:06.773107 kernel: vgaarb: loaded
Sep 4 23:58:06.773113 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:58:06.773120 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:58:06.773127 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:58:06.773200 kernel: pnp: PnP ACPI init
Sep 4 23:58:06.773277 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 4 23:58:06.773288 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 23:58:06.773295 kernel: NET: Registered PF_INET protocol family
Sep 4 23:58:06.773302 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:58:06.773309 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:58:06.773316 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:58:06.773323 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:58:06.773330 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:58:06.773338 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:58:06.773345 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:58:06.773352 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:58:06.773359 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:58:06.773367 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:58:06.773374 kernel: kvm [1]: HYP mode not available
Sep 4 23:58:06.773381 kernel: Initialise system trusted keyrings
Sep 4 23:58:06.773388 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:58:06.773394 kernel: Key type asymmetric registered
Sep 4 23:58:06.773402 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:58:06.773410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 4 23:58:06.773417 kernel: io scheduler mq-deadline registered
Sep 4 23:58:06.773424 kernel: io scheduler kyber registered
Sep 4 23:58:06.773431 kernel: io scheduler bfq registered
Sep 4 23:58:06.773438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 23:58:06.773446 kernel: ACPI: button: Power Button [PWRB]
Sep 4 23:58:06.773453 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 23:58:06.773515 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 4 23:58:06.773527 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:58:06.773534 kernel: thunder_xcv, ver 1.0
Sep 4 23:58:06.773540 kernel: thunder_bgx, ver 1.0
Sep 4 23:58:06.773547 kernel: nicpf, ver 1.0
Sep 4 23:58:06.773554 kernel: nicvf, ver 1.0
Sep 4 23:58:06.773627 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 23:58:06.773695 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:58:06 UTC (1757030286)
Sep 4 23:58:06.773705 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:58:06.773712 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 4 23:58:06.773720 kernel: watchdog: NMI not fully supported
Sep 4 23:58:06.773727 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 23:58:06.773734 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:58:06.773741 kernel: Segment Routing with IPv6
Sep 4 23:58:06.773748 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:58:06.773755 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:58:06.773762 kernel: Key type dns_resolver registered
Sep 4 23:58:06.773768 kernel: registered taskstats version 1
Sep 4 23:58:06.773775 kernel: Loading compiled-in X.509 certificates
Sep 4 23:58:06.773784 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 076c0e39153760a09e2827c98096964655099fd6'
Sep 4 23:58:06.773791 kernel: Demotion targets for Node 0: null
Sep 4 23:58:06.773797 kernel: Key type .fscrypt registered
Sep 4 23:58:06.773804 kernel: Key type fscrypt-provisioning registered
Sep 4 23:58:06.773811 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:58:06.773818 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:58:06.773825 kernel: ima: No architecture policies found
Sep 4 23:58:06.773831 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 23:58:06.773840 kernel: clk: Disabling unused clocks
Sep 4 23:58:06.773846 kernel: PM: genpd: Disabling unused power domains
Sep 4 23:58:06.773853 kernel: Warning: unable to open an initial console.
Sep 4 23:58:06.773860 kernel: Freeing unused kernel memory: 38976K
Sep 4 23:58:06.773867 kernel: Run /init as init process
Sep 4 23:58:06.773874 kernel: with arguments:
Sep 4 23:58:06.773881 kernel: /init
Sep 4 23:58:06.773887 kernel: with environment:
Sep 4 23:58:06.773894 kernel: HOME=/
Sep 4 23:58:06.773901 kernel: TERM=linux
Sep 4 23:58:06.773909 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:58:06.773917 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:58:06.773927 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:58:06.773935 systemd[1]: Detected virtualization kvm.
Sep 4 23:58:06.773942 systemd[1]: Detected architecture arm64.
Sep 4 23:58:06.773950 systemd[1]: Running in initrd.
Sep 4 23:58:06.773957 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:58:06.773966 systemd[1]: Hostname set to .
Sep 4 23:58:06.773974 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:58:06.773982 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:58:06.773989 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:58:06.773997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:58:06.774005 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:58:06.774013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:58:06.774020 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:58:06.774030 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:58:06.774039 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:58:06.774047 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:58:06.774054 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:58:06.774062 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:58:06.774070 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:58:06.774077 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:58:06.774086 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:58:06.774094 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:58:06.774101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:58:06.774109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:58:06.774116 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:58:06.774124 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:58:06.774140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:58:06.774149 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:58:06.774161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:58:06.774182 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:58:06.774191 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:58:06.774199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:58:06.774207 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:58:06.774215 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 23:58:06.774222 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:58:06.774230 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:58:06.774238 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:58:06.774247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:58:06.774255 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:58:06.774281 systemd-journald[244]: Collecting audit messages is disabled.
Sep 4 23:58:06.774302 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:58:06.774311 systemd-journald[244]: Journal started
Sep 4 23:58:06.774329 systemd-journald[244]: Runtime Journal (/run/log/journal/a7f7aac55ead4e1a93623c95c66d4217) is 6M, max 48.5M, 42.4M free.
Sep 4 23:58:06.765212 systemd-modules-load[245]: Inserted module 'overlay'
Sep 4 23:58:06.776856 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:58:06.778197 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:58:06.780806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:58:06.782179 kernel: Bridge firewalling registered
Sep 4 23:58:06.781422 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 4 23:58:06.782338 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:58:06.784553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:58:06.785672 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:58:06.795693 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:58:06.798446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:58:06.801472 systemd-tmpfiles[259]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 23:58:06.803027 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:58:06.805965 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:58:06.808435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:58:06.817811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:58:06.819001 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:58:06.822212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:58:06.829698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:58:06.830973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:58:06.834204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:58:06.854255 systemd-resolved[282]: Positive Trust Anchors:
Sep 4 23:58:06.854264 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:58:06.854297 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:58:06.859009 systemd-resolved[282]: Defaulting to hostname 'linux'.
Sep 4 23:58:06.860261 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:58:06.862117 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:58:06.867319 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=936dbc4ea592050e15794e1e6e7f70cd7cba0dbef72270410b4bbc6a29324de7
Sep 4 23:58:06.941146 kernel: SCSI subsystem initialized
Sep 4 23:58:06.945160 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:58:06.952170 kernel: iscsi: registered transport (tcp)
Sep 4 23:58:06.969153 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:58:06.969185 kernel: QLogic iSCSI HBA Driver
Sep 4 23:58:06.986205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:58:07.004716 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:58:07.006160 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:58:07.053349 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:58:07.055497 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:58:07.114168 kernel: raid6: neonx8 gen() 15780 MB/s
Sep 4 23:58:07.131151 kernel: raid6: neonx4 gen() 15799 MB/s
Sep 4 23:58:07.148152 kernel: raid6: neonx2 gen() 13198 MB/s
Sep 4 23:58:07.165161 kernel: raid6: neonx1 gen() 10543 MB/s
Sep 4 23:58:07.182151 kernel: raid6: int64x8 gen() 6899 MB/s
Sep 4 23:58:07.199152 kernel: raid6: int64x4 gen() 7353 MB/s
Sep 4 23:58:07.216150 kernel: raid6: int64x2 gen() 6102 MB/s
Sep 4 23:58:07.233155 kernel: raid6: int64x1 gen() 5030 MB/s
Sep 4 23:58:07.233176 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s
Sep 4 23:58:07.250159 kernel: raid6: .... xor() 12331 MB/s, rmw enabled
Sep 4 23:58:07.250174 kernel: raid6: using neon recovery algorithm
Sep 4 23:58:07.255155 kernel: xor: measuring software checksum speed
Sep 4 23:58:07.256342 kernel: 8regs : 17780 MB/sec
Sep 4 23:58:07.256360 kernel: 32regs : 21205 MB/sec
Sep 4 23:58:07.257366 kernel: arm64_neon : 28022 MB/sec
Sep 4 23:58:07.257377 kernel: xor: using function: arm64_neon (28022 MB/sec)
Sep 4 23:58:07.309160 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:58:07.316478 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:58:07.319615 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:58:07.347050 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 4 23:58:07.351083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:58:07.353263 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:58:07.376679 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 4 23:58:07.398328 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:58:07.402244 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:58:07.455888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:58:07.459403 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:58:07.506886 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 4 23:58:07.507041 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 23:58:07.514683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:58:07.517144 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 23:58:07.517166 kernel: GPT:9289727 != 19775487
Sep 4 23:58:07.517175 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 23:58:07.517184 kernel: GPT:9289727 != 19775487
Sep 4 23:58:07.517192 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 23:58:07.517200 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 23:58:07.517060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:58:07.519568 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:58:07.527907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:58:07.560306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:58:07.568715 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 23:58:07.575205 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:58:07.583354 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 23:58:07.596313 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 23:58:07.602415 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 23:58:07.603374 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 23:58:07.607773 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:58:07.610019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:58:07.613777 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:58:07.616230 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:58:07.617823 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:58:07.636684 disk-uuid[594]: Primary Header is updated.
Sep 4 23:58:07.636684 disk-uuid[594]: Secondary Entries is updated.
Sep 4 23:58:07.636684 disk-uuid[594]: Secondary Header is updated.
Sep 4 23:58:07.640149 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 23:58:07.640743 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:58:08.649179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 23:58:08.649236 disk-uuid[599]: The operation has completed successfully.
Sep 4 23:58:08.675811 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:58:08.676807 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:58:08.701677 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:58:08.730172 sh[615]: Success
Sep 4 23:58:08.741538 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:58:08.741591 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:58:08.742655 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 23:58:08.750163 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 4 23:58:08.781222 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:58:08.783901 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:58:08.800500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:58:08.813387 kernel: BTRFS: device fsid 7cf88bee-c029-4534-8152-24a8f9f8db3f devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (628)
Sep 4 23:58:08.813437 kernel: BTRFS info (device dm-0): first mount of filesystem 7cf88bee-c029-4534-8152-24a8f9f8db3f
Sep 4 23:58:08.813447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:58:08.820153 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:58:08.820200 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 23:58:08.821096 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:58:08.822200 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 23:58:08.823203 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:58:08.824021 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:58:08.826795 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:58:08.849047 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Sep 4 23:58:08.849094 kernel: BTRFS info (device vda6): first mount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2
Sep 4 23:58:08.849105 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:58:08.851655 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 23:58:08.851709 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 23:58:08.856183 kernel: BTRFS info (device vda6): last unmount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2
Sep 4 23:58:08.857856 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:58:08.861532 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:58:08.936187 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:58:08.939677 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:58:08.981214 systemd-networkd[810]: lo: Link UP
Sep 4 23:58:08.981226 systemd-networkd[810]: lo: Gained carrier
Sep 4 23:58:08.982042 systemd-networkd[810]: Enumeration completed
Sep 4 23:58:08.982372 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:58:08.982873 ignition[704]: Ignition 2.21.0
Sep 4 23:58:08.982444 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:58:08.982880 ignition[704]: Stage: fetch-offline
Sep 4 23:58:08.982447 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:58:08.983230 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:58:08.983292 systemd-networkd[810]: eth0: Link UP
Sep 4 23:58:08.983243 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:58:08.983440 systemd-networkd[810]: eth0: Gained carrier
Sep 4 23:58:08.983425 ignition[704]: parsed url from cmdline: ""
Sep 4 23:58:08.983448 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:58:08.983428 ignition[704]: no config URL provided
Sep 4 23:58:08.983764 systemd[1]: Reached target network.target - Network.
Sep 4 23:58:08.983434 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:58:08.983441 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:58:08.983462 ignition[704]: op(1): [started] loading QEMU firmware config module
Sep 4 23:58:08.983466 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 23:58:08.992292 ignition[704]: op(1): [finished] loading QEMU firmware config module
Sep 4 23:58:09.003214 systemd-networkd[810]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 23:58:09.037206 ignition[704]: parsing config with SHA512: 20f206fdcb72ef481f953488b3be66a8a02bfa96c104bcf2298c7936ebf35a458efb3622546082e2e1a9813ebe10415ba11af8dbc99d7bb8ca05e0192ccbd21c
Sep 4 23:58:09.041413 unknown[704]: fetched base config from "system"
Sep 4 23:58:09.041427 unknown[704]: fetched user config from "qemu"
Sep 4 23:58:09.041882 ignition[704]: fetch-offline: fetch-offline passed
Sep 4 23:58:09.041935 ignition[704]: Ignition finished successfully
Sep 4 23:58:09.044480 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:58:09.045508 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 23:58:09.047285 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:58:09.086889 ignition[819]: Ignition 2.21.0
Sep 4 23:58:09.086908 ignition[819]: Stage: kargs
Sep 4 23:58:09.087070 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:58:09.087079 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:58:09.089024 ignition[819]: kargs: kargs passed
Sep 4 23:58:09.089373 ignition[819]: Ignition finished successfully
Sep 4 23:58:09.093607 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:58:09.095457 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:58:09.116262 ignition[827]: Ignition 2.21.0
Sep 4 23:58:09.116280 ignition[827]: Stage: disks
Sep 4 23:58:09.116414 ignition[827]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:58:09.116422 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:58:09.118571 ignition[827]: disks: disks passed
Sep 4 23:58:09.120304 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:58:09.118627 ignition[827]: Ignition finished successfully
Sep 4 23:58:09.121279 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:58:09.122719 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:58:09.123988 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:58:09.125427 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:58:09.126834 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:58:09.129000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:58:09.155289 systemd-fsck[837]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 23:58:09.159070 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:58:09.161612 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:58:09.225155 kernel: EXT4-fs (vda9): mounted filesystem c1aea666-7bbc-4a3b-a66d-c37ebbad8baa r/w with ordered data mode. Quota mode: none.
Sep 4 23:58:09.226011 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:58:09.227198 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:58:09.229225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:58:09.240734 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:58:09.241622 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 23:58:09.241677 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:58:09.241703 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:58:09.251027 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (845)
Sep 4 23:58:09.251049 kernel: BTRFS info (device vda6): first mount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2
Sep 4 23:58:09.251060 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:58:09.253203 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 23:58:09.253226 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 23:58:09.254327 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:58:09.258374 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:58:09.260055 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:58:09.307559 initrd-setup-root[869]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:58:09.311801 initrd-setup-root[876]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:58:09.315310 initrd-setup-root[883]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:58:09.319169 initrd-setup-root[890]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:58:09.391220 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:58:09.393067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:58:09.394716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:58:09.410196 kernel: BTRFS info (device vda6): last unmount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2
Sep 4 23:58:09.435543 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:58:09.447410 ignition[958]: INFO : Ignition 2.21.0
Sep 4 23:58:09.447410 ignition[958]: INFO : Stage: mount
Sep 4 23:58:09.449437 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:58:09.449437 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:58:09.449437 ignition[958]: INFO : mount: mount passed
Sep 4 23:58:09.449437 ignition[958]: INFO : Ignition finished successfully
Sep 4 23:58:09.450971 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:58:09.452968 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:58:09.810663 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:58:09.812302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:58:09.843805 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (971)
Sep 4 23:58:09.846294 kernel: BTRFS info (device vda6): first mount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2
Sep 4 23:58:09.846344 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:58:09.850914 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 23:58:09.850967 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 23:58:09.851952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:58:09.908538 ignition[988]: INFO : Ignition 2.21.0
Sep 4 23:58:09.908538 ignition[988]: INFO : Stage: files
Sep 4 23:58:09.910700 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:58:09.910700 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:58:09.914854 ignition[988]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:58:09.914854 ignition[988]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:58:09.914854 ignition[988]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:58:09.919191 ignition[988]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:58:09.919191 ignition[988]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:58:09.919191 ignition[988]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:58:09.919191 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:58:09.919191 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 4 23:58:09.916481 unknown[988]: wrote ssh authorized keys file for user: core
Sep 4 23:58:09.980893 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:58:10.235511 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:58:10.235511 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:58:10.238984 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 23:58:10.335398 systemd-networkd[810]: eth0: Gained IPv6LL
Sep 4 23:58:10.437213 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:58:10.563756 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:58:10.563756 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:58:10.566475 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:58:10.566475 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:58:10.566475 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:58:10.566475 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:58:10.566475 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:58:10.566475 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:58:10.566475 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:58:10.576869 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:58:10.576869 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:58:10.576869 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:58:10.576869 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:58:10.576869 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:58:10.576869 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 4 23:58:10.893592 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:58:11.439560 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:58:11.439560 ignition[988]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:58:11.442756 ignition[988]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:58:11.446440 ignition[988]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:58:11.446440 ignition[988]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:58:11.446440 ignition[988]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 23:58:11.449914 ignition[988]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 23:58:11.449914 ignition[988]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 23:58:11.449914 ignition[988]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 23:58:11.449914 ignition[988]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 23:58:11.462906 ignition[988]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 23:58:11.466408 ignition[988]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 23:58:11.467645 ignition[988]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 23:58:11.467645 ignition[988]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:58:11.467645 ignition[988]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:58:11.467645 ignition[988]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:58:11.467645 ignition[988]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:58:11.467645 ignition[988]: INFO : files: files passed
Sep 4 23:58:11.467645 ignition[988]: INFO : Ignition finished successfully
Sep 4 23:58:11.470993 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:58:11.474303 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:58:11.476276 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:58:11.490121 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:58:11.490233 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:58:11.493188 initrd-setup-root-after-ignition[1016]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 23:58:11.494214 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:58:11.494214 initrd-setup-root-after-ignition[1019]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:58:11.496803 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:58:11.497450 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:58:11.499101 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:58:11.501418 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:58:11.527517 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:58:11.527626 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:58:11.529383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:58:11.531125 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:58:11.532658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:58:11.533490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:58:11.569803 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:58:11.572174 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:58:11.599231 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:58:11.601212 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:58:11.603094 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:58:11.603968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:58:11.604094 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:58:11.605943 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:58:11.607588 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:58:11.609006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:58:11.610493 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:58:11.611978 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:58:11.613529 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 23:58:11.615072 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:58:11.616913 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:58:11.618636 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:58:11.619996 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:58:11.621646 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:58:11.622818 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:58:11.622948 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:58:11.624826 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:58:11.626239 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:58:11.627907 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:58:11.631201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:58:11.632255 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:58:11.632381 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:58:11.634624 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:58:11.634749 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:58:11.636190 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:58:11.637403 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:58:11.637511 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:58:11.639053 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:58:11.640331 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:58:11.641653 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:58:11.641740 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:58:11.643309 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:58:11.643381 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:58:11.644643 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:58:11.644760 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:58:11.646149 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:58:11.646251 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:58:11.648229 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:58:11.649922 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:58:11.651501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:58:11.651610 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:58:11.653324 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:58:11.653420 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:58:11.659077 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:58:11.659207 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:58:11.665617 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:58:11.672358 ignition[1044]: INFO : Ignition 2.21.0
Sep 4 23:58:11.672358 ignition[1044]: INFO : Stage: umount
Sep 4 23:58:11.674437 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:58:11.674437 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:58:11.674437 ignition[1044]: INFO : umount: umount passed
Sep 4 23:58:11.674437 ignition[1044]: INFO : Ignition finished successfully
Sep 4 23:58:11.676849 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:58:11.676943 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:58:11.679058 systemd[1]: Stopped target network.target - Network.
Sep 4 23:58:11.680124 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:58:11.680208 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:58:11.681752 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:58:11.681793 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:58:11.683058 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:58:11.683102 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:58:11.684477 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:58:11.684514 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:58:11.686272 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:58:11.688162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:58:11.697075 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:58:11.697240 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:58:11.700855 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:58:11.701111 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:58:11.701249 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:58:11.704933 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:58:11.705502 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 23:58:11.707004 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:58:11.707045 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:58:11.709347 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:58:11.710637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:58:11.710699 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:58:11.712186 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:58:11.712227 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:58:11.714317 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:58:11.714360 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:58:11.716586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:58:11.716636 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:58:11.720323 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:58:11.722433 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:58:11.722496 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:58:11.729043 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:58:11.729152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:58:11.732069 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:58:11.732118 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:58:11.738016 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:58:11.738180 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:58:11.742819 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:58:11.742950 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:58:11.744714 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:58:11.744770 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:58:11.746957 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:58:11.746993 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:58:11.748377 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:58:11.748424 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:58:11.750695 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:58:11.750743 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:58:11.752900 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:58:11.752973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:58:11.756074 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:58:11.757519 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 23:58:11.757578 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:58:11.760334 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:58:11.760378 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:58:11.763172 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 23:58:11.763213 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:58:11.765727 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:58:11.765768 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:58:11.767417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:58:11.767461 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:58:11.770862 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 23:58:11.770914 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 4 23:58:11.770941 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:58:11.770969 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:58:11.777307 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:58:11.778340 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:58:11.779457 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:58:11.781589 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:58:11.806553 systemd[1]: Switching root.
Sep 4 23:58:11.846016 systemd-journald[244]: Journal stopped
Sep 4 23:58:12.566486 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:58:12.566535 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:58:12.566550 kernel: SELinux: policy capability open_perms=1
Sep 4 23:58:12.566559 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:58:12.566568 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:58:12.566580 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:58:12.566592 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:58:12.566602 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:58:12.566611 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:58:12.566620 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 23:58:12.566643 kernel: audit: type=1403 audit(1757030292.014:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:58:12.566657 systemd[1]: Successfully loaded SELinux policy in 39.833ms.
Sep 4 23:58:12.566686 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.476ms.
Sep 4 23:58:12.566697 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:58:12.566713 systemd[1]: Detected virtualization kvm.
Sep 4 23:58:12.566723 systemd[1]: Detected architecture arm64.
Sep 4 23:58:12.566734 systemd[1]: Detected first boot.
Sep 4 23:58:12.566745 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:58:12.566755 zram_generator::config[1089]: No configuration found.
Sep 4 23:58:12.566766 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:58:12.566775 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:58:12.566786 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:58:12.566797 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:58:12.566807 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:58:12.566816 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:58:12.566828 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:58:12.566838 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:58:12.566847 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:58:12.566857 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:58:12.566867 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:58:12.566877 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:58:12.566887 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:58:12.566897 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:58:12.566908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:58:12.566918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:58:12.566928 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:58:12.566938 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:58:12.566948 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:58:12.566959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:58:12.566969 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 23:58:12.566979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:58:12.566990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:58:12.566999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:58:12.567009 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:58:12.567018 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:58:12.567049 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:58:12.567059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:58:12.567072 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:58:12.567083 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:58:12.567093 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:58:12.567105 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:58:12.567115 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:58:12.567124 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:58:12.567207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:58:12.567219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:58:12.567230 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:58:12.567241 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:58:12.567251 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:58:12.567261 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:58:12.567273 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:58:12.567283 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:58:12.567293 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:58:12.567303 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:58:12.567314 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:58:12.567324 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:58:12.567334 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:58:12.567343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:58:12.567355 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:58:12.567365 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:58:12.567375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:58:12.567385 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:58:12.567394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:58:12.567404 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:58:12.567414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:58:12.567424 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:58:12.567434 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:58:12.567446 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:58:12.567455 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:58:12.567465 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:58:12.567476 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:58:12.567489 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:58:12.567498 kernel: fuse: init (API version 7.41)
Sep 4 23:58:12.567508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:58:12.567517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:58:12.567527 kernel: ACPI: bus type drm_connector registered
Sep 4 23:58:12.567537 kernel: loop: module loaded
Sep 4 23:58:12.567546 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:58:12.567556 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:58:12.567566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:58:12.567576 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:58:12.567586 systemd[1]: Stopped verity-setup.service.
Sep 4 23:58:12.567596 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:58:12.567606 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:58:12.567618 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:58:12.567659 systemd-journald[1164]: Collecting audit messages is disabled.
Sep 4 23:58:12.567683 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:58:12.567695 systemd-journald[1164]: Journal started
Sep 4 23:58:12.567720 systemd-journald[1164]: Runtime Journal (/run/log/journal/a7f7aac55ead4e1a93623c95c66d4217) is 6M, max 48.5M, 42.4M free.
Sep 4 23:58:12.370030 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:58:12.393105 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 23:58:12.393500 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:58:12.569385 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:58:12.570045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:58:12.571114 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:58:12.573202 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:58:12.574411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:58:12.575605 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:58:12.575784 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:58:12.577005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:58:12.578209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:58:12.579294 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:58:12.579467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:58:12.580586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:58:12.580766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:58:12.581990 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:58:12.582176 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:58:12.583226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:58:12.583394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:58:12.584732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:58:12.585923 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:58:12.587271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:58:12.588470 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:58:12.600862 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:58:12.603213 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:58:12.605186 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:58:12.606043 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:58:12.606074 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:58:12.607959 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:58:12.615235 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:58:12.616190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:58:12.617556 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:58:12.619367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:58:12.620411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:58:12.622282 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:58:12.623226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:58:12.626294 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:58:12.627272 systemd-journald[1164]: Time spent on flushing to /var/log/journal/a7f7aac55ead4e1a93623c95c66d4217 is 21.754ms for 893 entries.
Sep 4 23:58:12.627272 systemd-journald[1164]: System Journal (/var/log/journal/a7f7aac55ead4e1a93623c95c66d4217) is 8M, max 195.6M, 187.6M free.
Sep 4 23:58:12.653087 systemd-journald[1164]: Received client request to flush runtime journal.
Sep 4 23:58:12.653121 kernel: loop0: detected capacity change from 0 to 207008
Sep 4 23:58:12.629166 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:58:12.635368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:58:12.638722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:58:12.640480 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:58:12.642809 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:58:12.645690 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:58:12.649199 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:58:12.656115 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:58:12.658736 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:58:12.667165 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:58:12.670437 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:58:12.682909 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Sep 4 23:58:12.682925 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Sep 4 23:58:12.683966 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:58:12.686162 kernel: loop1: detected capacity change from 0 to 107312
Sep 4 23:58:12.689255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:58:12.692044 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:58:12.718182 kernel: loop2: detected capacity change from 0 to 138376
Sep 4 23:58:12.730768 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:58:12.734446 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:58:12.759168 kernel: loop3: detected capacity change from 0 to 207008
Sep 4 23:58:12.761328 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 4 23:58:12.761640 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 4 23:58:12.765176 kernel: loop4: detected capacity change from 0 to 107312
Sep 4 23:58:12.765718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:58:12.772178 kernel: loop5: detected capacity change from 0 to 138376
Sep 4 23:58:12.778814 (sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 23:58:12.779203 (sd-merge)[1231]: Merged extensions into '/usr'.
Sep 4 23:58:12.783441 systemd[1]: Reload requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:58:12.783580 systemd[1]: Reloading...
Sep 4 23:58:12.846173 zram_generator::config[1256]: No configuration found.
Sep 4 23:58:12.896205 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:58:12.933213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:58:12.996404 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:58:12.996973 systemd[1]: Reloading finished in 212 ms.
Sep 4 23:58:13.025181 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:58:13.026503 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:58:13.047544 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:58:13.049610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:58:13.065770 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 23:58:13.065803 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 23:58:13.066018 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:58:13.066215 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:58:13.066382 systemd[1]: Reload requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:58:13.066398 systemd[1]: Reloading...
Sep 4 23:58:13.066821 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:58:13.067026 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Sep 4 23:58:13.067075 systemd-tmpfiles[1293]: ACLs are not supported, ignoring.
Sep 4 23:58:13.069889 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:58:13.069903 systemd-tmpfiles[1293]: Skipping /boot
Sep 4 23:58:13.078956 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:58:13.078972 systemd-tmpfiles[1293]: Skipping /boot
Sep 4 23:58:13.117175 zram_generator::config[1320]: No configuration found.
Sep 4 23:58:13.179841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:58:13.242400 systemd[1]: Reloading finished in 175 ms.
Sep 4 23:58:13.261001 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:58:13.267749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:58:13.278251 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:58:13.280595 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:58:13.283192 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:58:13.286235 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:58:13.291290 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:58:13.294384 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:58:13.301427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:58:13.302825 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:58:13.305119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:58:13.310588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:58:13.311598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:58:13.311745 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:58:13.313728 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:58:13.322263 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:58:13.324334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:58:13.324512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:58:13.326224 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:58:13.327314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:58:13.328922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:58:13.329145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:58:13.330394 systemd-udevd[1362]: Using default interface naming scheme 'v255'.
Sep 4 23:58:13.335927 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:58:13.340367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:58:13.343395 augenrules[1392]: No rules
Sep 4 23:58:13.344427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:58:13.346471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:58:13.350382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:58:13.351291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:58:13.351407 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:58:13.355225 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:58:13.356049 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:58:13.357067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:58:13.359412 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:58:13.359638 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:58:13.360805 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:58:13.362778 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:58:13.365411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:58:13.365584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:58:13.367029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:58:13.368205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:58:13.369833 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:58:13.369987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:58:13.373274 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:58:13.389829 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:58:13.406395 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:58:13.407236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:58:13.408274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:58:13.411010 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:58:13.414247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:58:13.416929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:58:13.419214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:58:13.419268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:58:13.421331 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:58:13.427686 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 23:58:13.428858 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:58:13.429856 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 23:58:13.466058 systemd-resolved[1361]: Positive Trust Anchors:
Sep 4 23:58:13.466076 systemd-resolved[1361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:58:13.466107 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:58:13.466118 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:58:13.469897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:58:13.471546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:58:13.472188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:58:13.473979 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:58:13.474483 systemd-resolved[1361]: Defaulting to hostname 'linux'.
Sep 4 23:58:13.475227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:58:13.476239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:58:13.479037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:58:13.483698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:58:13.485993 augenrules[1439]: /sbin/augenrules: No change
Sep 4 23:58:13.486791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:58:13.486988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:58:13.490827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:58:13.493328 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 23:58:13.494248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:58:13.496485 augenrules[1475]: No rules Sep 4 23:58:13.498231 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:58:13.503369 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:58:13.515347 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 23:58:13.568450 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 23:58:13.569668 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:58:13.570923 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 23:58:13.572183 systemd-networkd[1445]: lo: Link UP Sep 4 23:58:13.572194 systemd-networkd[1445]: lo: Gained carrier Sep 4 23:58:13.572337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 23:58:13.573005 systemd-networkd[1445]: Enumeration completed Sep 4 23:58:13.573433 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:58:13.573441 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:58:13.573475 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 4 23:58:13.574023 systemd-networkd[1445]: eth0: Link UP Sep 4 23:58:13.574149 systemd-networkd[1445]: eth0: Gained carrier Sep 4 23:58:13.574167 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:58:13.574810 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:58:13.574849 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:58:13.575623 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 23:58:13.576585 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 23:58:13.577487 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:58:13.578657 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:58:13.583664 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:58:13.585918 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:58:13.589005 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:58:13.590302 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:58:13.591358 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:58:13.599199 systemd-networkd[1445]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:58:13.599941 systemd-timesyncd[1448]: Network configuration changed, trying to establish connection. Sep 4 23:58:13.600784 systemd-timesyncd[1448]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 23:58:13.600838 systemd-timesyncd[1448]: Initial clock synchronization to Thu 2025-09-04 23:58:13.832491 UTC. Sep 4 23:58:13.611141 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 4 23:58:13.612819 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:58:13.614574 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:58:13.615722 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:58:13.616797 systemd[1]: Reached target network.target - Network. Sep 4 23:58:13.617678 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:58:13.618587 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:58:13.619456 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:58:13.619505 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:58:13.621112 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:58:13.625552 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:58:13.629252 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 23:58:13.635024 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:58:13.638508 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:58:13.639286 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:58:13.640508 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:58:13.643016 jq[1504]: false Sep 4 23:58:13.644041 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:58:13.648286 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:58:13.650432 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 4 23:58:13.654008 extend-filesystems[1505]: Found /dev/vda6 Sep 4 23:58:13.654897 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:58:13.658396 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 23:58:13.659921 extend-filesystems[1505]: Found /dev/vda9 Sep 4 23:58:13.661867 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 23:58:13.662434 extend-filesystems[1505]: Checking size of /dev/vda9 Sep 4 23:58:13.663958 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:58:13.667266 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:58:13.669275 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:58:13.671049 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:58:13.674787 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:58:13.678278 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:58:13.680657 jq[1528]: true Sep 4 23:58:13.678458 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:58:13.678830 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:58:13.678996 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:58:13.682303 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:58:13.682479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 4 23:58:13.684511 extend-filesystems[1505]: Resized partition /dev/vda9 Sep 4 23:58:13.694225 extend-filesystems[1534]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 23:58:13.710950 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 23:58:13.711513 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:58:13.719149 update_engine[1525]: I20250904 23:58:13.716536 1525 main.cc:92] Flatcar Update Engine starting Sep 4 23:58:13.721526 tar[1533]: linux-arm64/LICENSE Sep 4 23:58:13.721526 tar[1533]: linux-arm64/helm Sep 4 23:58:13.724227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:58:13.737524 dbus-daemon[1502]: [system] SELinux support is enabled Sep 4 23:58:13.734673 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:58:13.739051 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:58:13.742571 jq[1542]: true Sep 4 23:58:13.745149 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:58:13.745468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:58:13.749006 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 23:58:13.747403 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:58:13.749122 update_engine[1525]: I20250904 23:58:13.747406 1525 update_check_scheduler.cc:74] Next update check in 10m13s Sep 4 23:58:13.747426 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 4 23:58:13.764325 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 23:58:13.764325 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:58:13.764325 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 23:58:13.753222 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:58:13.780477 extend-filesystems[1505]: Resized filesystem in /dev/vda9 Sep 4 23:58:13.762901 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:58:13.762981 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 23:58:13.765342 systemd-logind[1519]: New seat seat0. Sep 4 23:58:13.771455 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:58:13.772933 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:58:13.813472 bash[1569]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:58:13.815515 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:58:13.819395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:58:13.823165 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:58:13.830075 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 4 23:58:13.869919 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:58:13.926151 containerd[1543]: time="2025-09-04T23:58:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 23:58:13.926454 containerd[1543]: time="2025-09-04T23:58:13.926414800Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935409360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.44µs" Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935454480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935475440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935695680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935714000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935740440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935791040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.935801760Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.936028400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.936041560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.936053080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936202 containerd[1543]: time="2025-09-04T23:58:13.936062160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936470 containerd[1543]: time="2025-09-04T23:58:13.936128520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936470 containerd[1543]: time="2025-09-04T23:58:13.936340760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936470 containerd[1543]: time="2025-09-04T23:58:13.936369520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 23:58:13.936470 containerd[1543]: time="2025-09-04T23:58:13.936380960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 23:58:13.936470 containerd[1543]: time="2025-09-04T23:58:13.936428880Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 23:58:13.937834 containerd[1543]: time="2025-09-04T23:58:13.937796560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 23:58:13.937925 containerd[1543]: time="2025-09-04T23:58:13.937906880Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:58:13.941991 containerd[1543]: time="2025-09-04T23:58:13.941943840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 23:58:13.942052 containerd[1543]: time="2025-09-04T23:58:13.942012240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 23:58:13.942052 containerd[1543]: time="2025-09-04T23:58:13.942031480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 23:58:13.942052 containerd[1543]: time="2025-09-04T23:58:13.942044360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 23:58:13.942103 containerd[1543]: time="2025-09-04T23:58:13.942056760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 23:58:13.942103 containerd[1543]: time="2025-09-04T23:58:13.942069080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 23:58:13.942177 containerd[1543]: time="2025-09-04T23:58:13.942100600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 23:58:13.942177 containerd[1543]: time="2025-09-04T23:58:13.942115440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 23:58:13.942177 containerd[1543]: time="2025-09-04T23:58:13.942127280Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 4 23:58:13.942177 containerd[1543]: time="2025-09-04T23:58:13.942152800Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 23:58:13.942177 containerd[1543]: time="2025-09-04T23:58:13.942164120Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 23:58:13.942177 containerd[1543]: time="2025-09-04T23:58:13.942177760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 23:58:13.942346 containerd[1543]: time="2025-09-04T23:58:13.942325640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 23:58:13.942371 containerd[1543]: time="2025-09-04T23:58:13.942352240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 23:58:13.942371 containerd[1543]: time="2025-09-04T23:58:13.942367320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 23:58:13.942403 containerd[1543]: time="2025-09-04T23:58:13.942379440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 23:58:13.942403 containerd[1543]: time="2025-09-04T23:58:13.942393680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 23:58:13.942441 containerd[1543]: time="2025-09-04T23:58:13.942409280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 23:58:13.942441 containerd[1543]: time="2025-09-04T23:58:13.942421720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 23:58:13.942441 containerd[1543]: time="2025-09-04T23:58:13.942432720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 23:58:13.942493 containerd[1543]: 
time="2025-09-04T23:58:13.942445960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 23:58:13.942493 containerd[1543]: time="2025-09-04T23:58:13.942459560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 23:58:13.942493 containerd[1543]: time="2025-09-04T23:58:13.942472560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 23:58:13.942688 containerd[1543]: time="2025-09-04T23:58:13.942669960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 23:58:13.942714 containerd[1543]: time="2025-09-04T23:58:13.942691040Z" level=info msg="Start snapshots syncer" Sep 4 23:58:13.942732 containerd[1543]: time="2025-09-04T23:58:13.942720520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 23:58:13.943001 containerd[1543]: time="2025-09-04T23:58:13.942963240Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 4 23:58:13.943092 containerd[1543]: time="2025-09-04T23:58:13.943018120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 4 23:58:13.943114 containerd[1543]: time="2025-09-04T23:58:13.943092520Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 23:58:13.943263 containerd[1543]: time="2025-09-04T23:58:13.943238240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 23:58:13.943288 containerd[1543]: time="2025-09-04T23:58:13.943273680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 23:58:13.943306 containerd[1543]: time="2025-09-04T23:58:13.943286040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 23:58:13.943306 containerd[1543]: time="2025-09-04T23:58:13.943296320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 23:58:13.943338 containerd[1543]: time="2025-09-04T23:58:13.943307800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 23:58:13.943338 containerd[1543]: time="2025-09-04T23:58:13.943318960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 23:58:13.943381 containerd[1543]: time="2025-09-04T23:58:13.943338720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 23:58:13.943381 containerd[1543]: time="2025-09-04T23:58:13.943365000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 23:58:13.943381 containerd[1543]: time="2025-09-04T23:58:13.943376680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 23:58:13.943432 containerd[1543]: time="2025-09-04T23:58:13.943391400Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 23:58:13.943449 containerd[1543]: time="2025-09-04T23:58:13.943437920Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 23:58:13.943469 containerd[1543]: time="2025-09-04T23:58:13.943452680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 23:58:13.943469 containerd[1543]: time="2025-09-04T23:58:13.943462440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 23:58:13.943502 containerd[1543]: time="2025-09-04T23:58:13.943472040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943480720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943552400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943563720Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943649840Z" level=info msg="runtime interface created" Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943655720Z" level=info msg="created NRI interface" Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943666320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943677960Z" level=info msg="Connect containerd service" Sep 4 23:58:13.946142 containerd[1543]: time="2025-09-04T23:58:13.943703040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:58:13.946142 containerd[1543]: 
time="2025-09-04T23:58:13.944544200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:58:14.028504 containerd[1543]: time="2025-09-04T23:58:14.028440470Z" level=info msg="Start subscribing containerd event" Sep 4 23:58:14.028630 containerd[1543]: time="2025-09-04T23:58:14.028518713Z" level=info msg="Start recovering state" Sep 4 23:58:14.028630 containerd[1543]: time="2025-09-04T23:58:14.028613955Z" level=info msg="Start event monitor" Sep 4 23:58:14.028666 containerd[1543]: time="2025-09-04T23:58:14.028630130Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:58:14.028666 containerd[1543]: time="2025-09-04T23:58:14.028638774Z" level=info msg="Start streaming server" Sep 4 23:58:14.028666 containerd[1543]: time="2025-09-04T23:58:14.028649269Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 23:58:14.028666 containerd[1543]: time="2025-09-04T23:58:14.028657995Z" level=info msg="runtime interface starting up..." Sep 4 23:58:14.028729 containerd[1543]: time="2025-09-04T23:58:14.028663675Z" level=info msg="starting plugins..." Sep 4 23:58:14.028729 containerd[1543]: time="2025-09-04T23:58:14.028684378Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 23:58:14.028878 containerd[1543]: time="2025-09-04T23:58:14.028850042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:58:14.028928 containerd[1543]: time="2025-09-04T23:58:14.028913633Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:58:14.029141 containerd[1543]: time="2025-09-04T23:58:14.029119756Z" level=info msg="containerd successfully booted in 0.103561s" Sep 4 23:58:14.029304 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 23:58:14.150210 tar[1533]: linux-arm64/README.md Sep 4 23:58:14.167662 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:58:14.751307 systemd-networkd[1445]: eth0: Gained IPv6LL Sep 4 23:58:14.753715 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:58:14.755260 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:58:14.757704 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 23:58:14.760065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:58:14.767457 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:58:14.790070 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 23:58:14.792558 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 23:58:14.794019 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:58:14.802286 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:58:14.995136 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:58:15.015482 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:58:15.018318 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:58:15.042135 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:58:15.042426 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:58:15.045093 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:58:15.062733 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:58:15.065629 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:58:15.067744 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 23:58:15.068967 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 23:58:15.355344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:58:15.356721 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:58:15.359341 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:58:15.362314 systemd[1]: Startup finished in 2.005s (kernel) + 5.425s (initrd) + 3.387s (userspace) = 10.818s. Sep 4 23:58:15.738184 kubelet[1644]: E0904 23:58:15.738010 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:58:15.740351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:58:15.740492 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:58:15.740813 systemd[1]: kubelet.service: Consumed 752ms CPU time, 257.7M memory peak. Sep 4 23:58:19.913012 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:58:19.914261 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:46540.service - OpenSSH per-connection server daemon (10.0.0.1:46540). Sep 4 23:58:19.991856 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 46540 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:58:19.994993 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:58:20.001637 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:58:20.002703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:58:20.008208 systemd-logind[1519]: New session 1 of user core. 
Sep 4 23:58:20.032225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:58:20.035108 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:58:20.056389 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:58:20.058728 systemd-logind[1519]: New session c1 of user core. Sep 4 23:58:20.175492 systemd[1662]: Queued start job for default target default.target. Sep 4 23:58:20.187257 systemd[1662]: Created slice app.slice - User Application Slice. Sep 4 23:58:20.187288 systemd[1662]: Reached target paths.target - Paths. Sep 4 23:58:20.187330 systemd[1662]: Reached target timers.target - Timers. Sep 4 23:58:20.188727 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:58:20.200208 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:58:20.200523 systemd[1662]: Reached target sockets.target - Sockets. Sep 4 23:58:20.200664 systemd[1662]: Reached target basic.target - Basic System. Sep 4 23:58:20.200780 systemd[1662]: Reached target default.target - Main User Target. Sep 4 23:58:20.200820 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:58:20.200920 systemd[1662]: Startup finished in 135ms. Sep 4 23:58:20.202312 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:58:20.273244 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:46554.service - OpenSSH per-connection server daemon (10.0.0.1:46554). Sep 4 23:58:20.327774 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 46554 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:58:20.329578 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:58:20.337840 systemd-logind[1519]: New session 2 of user core. Sep 4 23:58:20.357417 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 4 23:58:20.416204 sshd[1675]: Connection closed by 10.0.0.1 port 46554 Sep 4 23:58:20.416890 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Sep 4 23:58:20.429656 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:46554.service: Deactivated successfully. Sep 4 23:58:20.431378 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:58:20.432170 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:58:20.435372 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:46570.service - OpenSSH per-connection server daemon (10.0.0.1:46570). Sep 4 23:58:20.435944 systemd-logind[1519]: Removed session 2. Sep 4 23:58:20.514362 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 46570 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:58:20.515813 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:58:20.520219 systemd-logind[1519]: New session 3 of user core. Sep 4 23:58:20.537338 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:58:20.586869 sshd[1684]: Connection closed by 10.0.0.1 port 46570 Sep 4 23:58:20.587386 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Sep 4 23:58:20.601484 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:46570.service: Deactivated successfully. Sep 4 23:58:20.603650 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:58:20.606321 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:58:20.609011 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:46582.service - OpenSSH per-connection server daemon (10.0.0.1:46582). Sep 4 23:58:20.609686 systemd-logind[1519]: Removed session 3. 
Sep 4 23:58:20.663368 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 46582 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:58:20.664712 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:58:20.668752 systemd-logind[1519]: New session 4 of user core. Sep 4 23:58:20.683345 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:58:20.738283 sshd[1692]: Connection closed by 10.0.0.1 port 46582 Sep 4 23:58:20.738085 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Sep 4 23:58:20.747537 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:46582.service: Deactivated successfully. Sep 4 23:58:20.750622 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:58:20.752913 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:58:20.755303 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:46590.service - OpenSSH per-connection server daemon (10.0.0.1:46590). Sep 4 23:58:20.758453 systemd-logind[1519]: Removed session 4. Sep 4 23:58:20.818322 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 46590 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:58:20.819681 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:58:20.824232 systemd-logind[1519]: New session 5 of user core. Sep 4 23:58:20.834336 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 23:58:20.893112 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:58:20.893425 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:58:20.919972 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 4 23:58:20.922793 sshd[1700]: Connection closed by 10.0.0.1 port 46590 Sep 4 23:58:20.922294 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Sep 4 23:58:20.930498 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:46590.service: Deactivated successfully. Sep 4 23:58:20.933561 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:58:20.934364 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:58:20.936660 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:46596.service - OpenSSH per-connection server daemon (10.0.0.1:46596). Sep 4 23:58:20.937559 systemd-logind[1519]: Removed session 5. Sep 4 23:58:20.989811 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 46596 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:58:20.991352 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:58:20.995554 systemd-logind[1519]: New session 6 of user core. Sep 4 23:58:21.012331 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 23:58:21.065731 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:58:21.066018 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:58:21.149690 sudo[1711]: pam_unix(sudo:session): session closed for user root Sep 4 23:58:21.154951 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:58:21.155273 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:58:21.165030 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:58:21.211742 augenrules[1733]: No rules Sep 4 23:58:21.213269 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:58:21.213487 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:58:21.214851 sudo[1710]: pam_unix(sudo:session): session closed for user root Sep 4 23:58:21.216848 sshd[1709]: Connection closed by 10.0.0.1 port 46596 Sep 4 23:58:21.216754 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Sep 4 23:58:21.229471 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:46596.service: Deactivated successfully. Sep 4 23:58:21.231136 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:58:21.231858 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:58:21.234239 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:46608.service - OpenSSH per-connection server daemon (10.0.0.1:46608). Sep 4 23:58:21.235216 systemd-logind[1519]: Removed session 6. Sep 4 23:58:21.290046 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 46608 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:58:21.291569 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:58:21.295543 systemd-logind[1519]: New session 7 of user core. 
Sep 4 23:58:21.310328 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:58:21.363250 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:58:21.363528 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:58:21.673610 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:58:21.700562 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:58:21.936460 dockerd[1765]: time="2025-09-04T23:58:21.936315424Z" level=info msg="Starting up" Sep 4 23:58:21.937914 dockerd[1765]: time="2025-09-04T23:58:21.937883463Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 23:58:21.984002 dockerd[1765]: time="2025-09-04T23:58:21.983791894Z" level=info msg="Loading containers: start." Sep 4 23:58:21.995176 kernel: Initializing XFRM netlink socket Sep 4 23:58:22.189406 systemd-networkd[1445]: docker0: Link UP Sep 4 23:58:22.193528 dockerd[1765]: time="2025-09-04T23:58:22.193469992Z" level=info msg="Loading containers: done." Sep 4 23:58:22.205886 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2114448333-merged.mount: Deactivated successfully. 
Sep 4 23:58:22.211693 dockerd[1765]: time="2025-09-04T23:58:22.211636537Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:58:22.211816 dockerd[1765]: time="2025-09-04T23:58:22.211732846Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 4 23:58:22.211885 dockerd[1765]: time="2025-09-04T23:58:22.211859293Z" level=info msg="Initializing buildkit" Sep 4 23:58:22.236457 dockerd[1765]: time="2025-09-04T23:58:22.236399098Z" level=info msg="Completed buildkit initialization" Sep 4 23:58:22.241510 dockerd[1765]: time="2025-09-04T23:58:22.241458528Z" level=info msg="Daemon has completed initialization" Sep 4 23:58:22.241697 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:58:22.242062 dockerd[1765]: time="2025-09-04T23:58:22.242008227Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:58:22.765265 containerd[1543]: time="2025-09-04T23:58:22.765221992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 23:58:23.327585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125909392.mount: Deactivated successfully. 
Sep 4 23:58:24.204215 containerd[1543]: time="2025-09-04T23:58:24.204165055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:24.205469 containerd[1543]: time="2025-09-04T23:58:24.205249860Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359" Sep 4 23:58:24.206243 containerd[1543]: time="2025-09-04T23:58:24.206215444Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:24.208694 containerd[1543]: time="2025-09-04T23:58:24.208658242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:24.209927 containerd[1543]: time="2025-09-04T23:58:24.209712375Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.444440478s" Sep 4 23:58:24.209927 containerd[1543]: time="2025-09-04T23:58:24.209757557Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 4 23:58:24.210964 containerd[1543]: time="2025-09-04T23:58:24.210935507Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 23:58:25.207967 containerd[1543]: time="2025-09-04T23:58:25.207909337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:25.208826 containerd[1543]: time="2025-09-04T23:58:25.208792427Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554" Sep 4 23:58:25.209710 containerd[1543]: time="2025-09-04T23:58:25.209669276Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:25.213127 containerd[1543]: time="2025-09-04T23:58:25.213069764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:25.213906 containerd[1543]: time="2025-09-04T23:58:25.213868737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.002893418s" Sep 4 23:58:25.213953 containerd[1543]: time="2025-09-04T23:58:25.213908077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 4 23:58:25.214576 containerd[1543]: time="2025-09-04T23:58:25.214546507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 23:58:25.991010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:58:25.992412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:58:26.130859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:58:26.133827 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:58:26.194347 kubelet[2042]: E0904 23:58:26.194283 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:58:26.197106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:58:26.197259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:58:26.199209 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.5M memory peak. Sep 4 23:58:26.566905 containerd[1543]: time="2025-09-04T23:58:26.566856206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:26.567746 containerd[1543]: time="2025-09-04T23:58:26.567415129Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529" Sep 4 23:58:26.568404 containerd[1543]: time="2025-09-04T23:58:26.568369224Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:26.571272 containerd[1543]: time="2025-09-04T23:58:26.571241649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:26.572907 containerd[1543]: time="2025-09-04T23:58:26.572870780Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id 
\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.358286592s" Sep 4 23:58:26.572907 containerd[1543]: time="2025-09-04T23:58:26.572911054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 4 23:58:26.573474 containerd[1543]: time="2025-09-04T23:58:26.573447366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 23:58:27.518748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163189930.mount: Deactivated successfully. Sep 4 23:58:27.752416 containerd[1543]: time="2025-09-04T23:58:27.752354181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:27.752872 containerd[1543]: time="2025-09-04T23:58:27.752824089Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726" Sep 4 23:58:27.753632 containerd[1543]: time="2025-09-04T23:58:27.753603729Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:27.755169 containerd[1543]: time="2025-09-04T23:58:27.755119268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:27.755991 containerd[1543]: time="2025-09-04T23:58:27.755952340Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.182472592s" Sep 4 23:58:27.756022 containerd[1543]: time="2025-09-04T23:58:27.755991379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 4 23:58:27.756735 containerd[1543]: time="2025-09-04T23:58:27.756523080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 23:58:28.231235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544100206.mount: Deactivated successfully. Sep 4 23:58:28.939838 containerd[1543]: time="2025-09-04T23:58:28.939788590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:28.940784 containerd[1543]: time="2025-09-04T23:58:28.940525950Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 4 23:58:28.941527 containerd[1543]: time="2025-09-04T23:58:28.941489355Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:28.984219 containerd[1543]: time="2025-09-04T23:58:28.984168265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:28.985581 containerd[1543]: time="2025-09-04T23:58:28.985545993Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.228988827s" Sep 4 23:58:28.985581 containerd[1543]: time="2025-09-04T23:58:28.985581712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 4 23:58:28.986381 containerd[1543]: time="2025-09-04T23:58:28.986024843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:58:29.412498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900091063.mount: Deactivated successfully. Sep 4 23:58:29.416804 containerd[1543]: time="2025-09-04T23:58:29.416757421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:58:29.417498 containerd[1543]: time="2025-09-04T23:58:29.417277527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 4 23:58:29.418314 containerd[1543]: time="2025-09-04T23:58:29.418271317Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:58:29.420307 containerd[1543]: time="2025-09-04T23:58:29.420268012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:58:29.421124 containerd[1543]: time="2025-09-04T23:58:29.421095113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.042992ms" Sep 4 23:58:29.421124 containerd[1543]: time="2025-09-04T23:58:29.421124708Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 4 23:58:29.421666 containerd[1543]: time="2025-09-04T23:58:29.421635457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 23:58:30.306878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234518579.mount: Deactivated successfully. Sep 4 23:58:32.077143 containerd[1543]: time="2025-09-04T23:58:32.077091034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:32.078405 containerd[1543]: time="2025-09-04T23:58:32.078371378Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 4 23:58:32.079378 containerd[1543]: time="2025-09-04T23:58:32.079346004Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:32.081937 containerd[1543]: time="2025-09-04T23:58:32.081901679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:32.083935 containerd[1543]: time="2025-09-04T23:58:32.083900380Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.662224859s" Sep 4 23:58:32.083979 containerd[1543]: time="2025-09-04T23:58:32.083940806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 4 23:58:35.978953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:58:35.979100 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.5M memory peak. Sep 4 23:58:35.980947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:58:36.001841 systemd[1]: Reload requested from client PID 2201 ('systemctl') (unit session-7.scope)... Sep 4 23:58:36.001856 systemd[1]: Reloading... Sep 4 23:58:36.066281 zram_generator::config[2244]: No configuration found. Sep 4 23:58:36.154336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:58:36.240498 systemd[1]: Reloading finished in 238 ms. Sep 4 23:58:36.308718 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 23:58:36.308801 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 23:58:36.309064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:58:36.309110 systemd[1]: kubelet.service: Consumed 83ms CPU time, 95.1M memory peak. Sep 4 23:58:36.310649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:58:36.418893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:58:36.423307 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:58:36.454508 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:58:36.454508 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:58:36.454508 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:58:36.454821 kubelet[2288]: I0904 23:58:36.454597 2288 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:58:36.911289 kubelet[2288]: I0904 23:58:36.910642 2288 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:58:36.911289 kubelet[2288]: I0904 23:58:36.910671 2288 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:58:36.911289 kubelet[2288]: I0904 23:58:36.911180 2288 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:58:36.930829 kubelet[2288]: E0904 23:58:36.930798 2288 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:58:36.932605 kubelet[2288]: I0904 23:58:36.932575 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:58:36.937572 kubelet[2288]: I0904 23:58:36.937554 2288 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 23:58:36.940122 kubelet[2288]: I0904 23:58:36.940105 2288 
server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:58:36.940753 kubelet[2288]: I0904 23:58:36.940709 2288 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:58:36.940905 kubelet[2288]: I0904 23:58:36.940746 2288 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:58:36.940989 kubelet[2288]: I0904 
23:58:36.940970 2288 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:58:36.940989 kubelet[2288]: I0904 23:58:36.940979 2288 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:58:36.941183 kubelet[2288]: I0904 23:58:36.941170 2288 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:58:36.943447 kubelet[2288]: I0904 23:58:36.943430 2288 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:58:36.943498 kubelet[2288]: I0904 23:58:36.943451 2288 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:58:36.943498 kubelet[2288]: I0904 23:58:36.943471 2288 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:58:36.943498 kubelet[2288]: I0904 23:58:36.943481 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:58:36.946936 kubelet[2288]: I0904 23:58:36.946916 2288 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 23:58:36.947501 kubelet[2288]: I0904 23:58:36.947478 2288 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:58:36.948577 kubelet[2288]: W0904 23:58:36.948523 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Sep 4 23:58:36.948609 kubelet[2288]: E0904 23:58:36.948593 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:58:36.948690 kubelet[2288]: W0904 23:58:36.948669 2288 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Sep 4 23:58:36.948714 kubelet[2288]: E0904 23:58:36.948700 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:58:36.951944 kubelet[2288]: W0904 23:58:36.948177 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:58:36.953643 kubelet[2288]: I0904 23:58:36.952999 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:58:36.953643 kubelet[2288]: I0904 23:58:36.953038 2288 server.go:1287] "Started kubelet" Sep 4 23:58:36.953643 kubelet[2288]: I0904 23:58:36.953445 2288 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:58:36.953643 kubelet[2288]: I0904 23:58:36.953475 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:58:36.953992 kubelet[2288]: I0904 23:58:36.953974 2288 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:58:36.954494 kubelet[2288]: I0904 23:58:36.954457 2288 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:58:36.955701 kubelet[2288]: E0904 23:58:36.955481 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186239c39d939460 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:58:36.953015392 +0000 UTC m=+0.526938375,LastTimestamp:2025-09-04 23:58:36.953015392 +0000 UTC m=+0.526938375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:58:36.955971 kubelet[2288]: I0904 23:58:36.955925 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:58:36.956032 kubelet[2288]: I0904 23:58:36.956008 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:58:36.956396 kubelet[2288]: I0904 23:58:36.956376 2288 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:58:36.956480 kubelet[2288]: I0904 23:58:36.956468 2288 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:58:36.956535 kubelet[2288]: I0904 23:58:36.956525 2288 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:58:36.956808 kubelet[2288]: W0904 23:58:36.956777 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Sep 4 23:58:36.956844 kubelet[2288]: E0904 23:58:36.956822 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:58:36.957154 kubelet[2288]: E0904 23:58:36.957119 2288 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:58:36.957253 kubelet[2288]: E0904 23:58:36.957229 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms" Sep 4 23:58:36.957837 kubelet[2288]: I0904 23:58:36.957821 2288 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:58:36.958064 kubelet[2288]: I0904 23:58:36.958024 2288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:58:36.960487 kubelet[2288]: E0904 23:58:36.960465 2288 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:58:36.962469 kubelet[2288]: I0904 23:58:36.962451 2288 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:58:36.972558 kubelet[2288]: I0904 23:58:36.972510 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:58:36.972558 kubelet[2288]: I0904 23:58:36.972524 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:58:36.972558 kubelet[2288]: I0904 23:58:36.972540 2288 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:58:36.973170 kubelet[2288]: I0904 23:58:36.973035 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:58:36.974063 kubelet[2288]: I0904 23:58:36.974043 2288 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 23:58:36.974158 kubelet[2288]: I0904 23:58:36.974132 2288 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:58:36.974240 kubelet[2288]: I0904 23:58:36.974227 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:58:36.974284 kubelet[2288]: I0904 23:58:36.974276 2288 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:58:36.974368 kubelet[2288]: E0904 23:58:36.974345 2288 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:58:37.057518 kubelet[2288]: E0904 23:58:37.057468 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:58:37.067701 kubelet[2288]: I0904 23:58:37.067355 2288 policy_none.go:49] "None policy: Start" Sep 4 23:58:37.067701 kubelet[2288]: I0904 23:58:37.067387 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:58:37.067701 kubelet[2288]: I0904 23:58:37.067399 2288 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:58:37.068282 kubelet[2288]: W0904 23:58:37.068237 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Sep 4 23:58:37.068965 kubelet[2288]: E0904 23:58:37.068355 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:58:37.073698 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Sep 4 23:58:37.075335 kubelet[2288]: E0904 23:58:37.075311 2288 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:58:37.087303 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:58:37.090229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:58:37.109948 kubelet[2288]: I0904 23:58:37.109927 2288 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:58:37.110146 kubelet[2288]: I0904 23:58:37.110115 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:58:37.110189 kubelet[2288]: I0904 23:58:37.110146 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:58:37.110793 kubelet[2288]: I0904 23:58:37.110439 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:58:37.111312 kubelet[2288]: E0904 23:58:37.111292 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 23:58:37.111395 kubelet[2288]: E0904 23:58:37.111354 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 23:58:37.158217 kubelet[2288]: E0904 23:58:37.158178 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms" Sep 4 23:58:37.211732 kubelet[2288]: I0904 23:58:37.211299 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:58:37.211732 kubelet[2288]: E0904 23:58:37.211645 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Sep 4 23:58:37.286351 systemd[1]: Created slice kubepods-burstable-pod0d43850bbd5fc69fb1a21be74c6473e2.slice - libcontainer container kubepods-burstable-pod0d43850bbd5fc69fb1a21be74c6473e2.slice. Sep 4 23:58:37.316818 kubelet[2288]: E0904 23:58:37.316640 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:58:37.319428 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 4 23:58:37.333470 kubelet[2288]: E0904 23:58:37.333428 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:58:37.337113 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
Sep 4 23:58:37.338967 kubelet[2288]: E0904 23:58:37.338803 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:58:37.358385 kubelet[2288]: I0904 23:58:37.358347 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:58:37.358600 kubelet[2288]: I0904 23:58:37.358562 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d43850bbd5fc69fb1a21be74c6473e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d43850bbd5fc69fb1a21be74c6473e2\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:37.358674 kubelet[2288]: I0904 23:58:37.358585 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:37.358804 kubelet[2288]: I0904 23:58:37.358750 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:37.358804 kubelet[2288]: I0904 23:58:37.358775 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:37.358988 kubelet[2288]: I0904 23:58:37.358790 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:37.358988 kubelet[2288]: I0904 23:58:37.358947 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d43850bbd5fc69fb1a21be74c6473e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d43850bbd5fc69fb1a21be74c6473e2\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:37.358988 kubelet[2288]: I0904 23:58:37.358965 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d43850bbd5fc69fb1a21be74c6473e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d43850bbd5fc69fb1a21be74c6473e2\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:37.359148 kubelet[2288]: I0904 23:58:37.359101 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:37.414095 kubelet[2288]: I0904 23:58:37.413757 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:58:37.414095 
kubelet[2288]: E0904 23:58:37.414064 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Sep 4 23:58:37.558989 kubelet[2288]: E0904 23:58:37.558887 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms" Sep 4 23:58:37.617962 containerd[1543]: time="2025-09-04T23:58:37.617914660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d43850bbd5fc69fb1a21be74c6473e2,Namespace:kube-system,Attempt:0,}" Sep 4 23:58:37.636723 containerd[1543]: time="2025-09-04T23:58:37.636686281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 4 23:58:37.640362 containerd[1543]: time="2025-09-04T23:58:37.640340463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 4 23:58:37.790464 containerd[1543]: time="2025-09-04T23:58:37.790426358Z" level=info msg="connecting to shim e7e22a5490ea140887c5a98868000439a39068460864a8d7093611c04233af11" address="unix:///run/containerd/s/e88ccde4af12af40d1dddc941077832caf920e094e4074320530c4caf4dd8fa0" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:58:37.794713 containerd[1543]: time="2025-09-04T23:58:37.794520089Z" level=info msg="connecting to shim 463eb048281df0ff6cd273daf6f5ad9e0a348a882ef8a2834ef483b3a906e409" address="unix:///run/containerd/s/4b55246e7c38e4bf922fd9b037330791fd32483e5ff229c1bbb31dbe95b44dad" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:58:37.806476 containerd[1543]: time="2025-09-04T23:58:37.806432509Z" level=info 
msg="connecting to shim 0a5440d1f4bca03020ccecd7072f2e240649b01815608840702a5ac42569e1d3" address="unix:///run/containerd/s/0fc4035a6743bb71645d981672fc9a32adef9c4a3c8740593864a5698f486522" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:58:37.816607 kubelet[2288]: I0904 23:58:37.815931 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:58:37.816607 kubelet[2288]: E0904 23:58:37.816275 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Sep 4 23:58:37.820360 systemd[1]: Started cri-containerd-463eb048281df0ff6cd273daf6f5ad9e0a348a882ef8a2834ef483b3a906e409.scope - libcontainer container 463eb048281df0ff6cd273daf6f5ad9e0a348a882ef8a2834ef483b3a906e409. Sep 4 23:58:37.823852 systemd[1]: Started cri-containerd-e7e22a5490ea140887c5a98868000439a39068460864a8d7093611c04233af11.scope - libcontainer container e7e22a5490ea140887c5a98868000439a39068460864a8d7093611c04233af11. Sep 4 23:58:37.826311 kubelet[2288]: W0904 23:58:37.826201 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Sep 4 23:58:37.826311 kubelet[2288]: E0904 23:58:37.826283 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:58:37.827696 systemd[1]: Started cri-containerd-0a5440d1f4bca03020ccecd7072f2e240649b01815608840702a5ac42569e1d3.scope - libcontainer container 0a5440d1f4bca03020ccecd7072f2e240649b01815608840702a5ac42569e1d3. 
Sep 4 23:58:37.839016 kubelet[2288]: W0904 23:58:37.838927 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Sep 4 23:58:37.839016 kubelet[2288]: E0904 23:58:37.838986 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:58:37.867487 containerd[1543]: time="2025-09-04T23:58:37.867437185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d43850bbd5fc69fb1a21be74c6473e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"463eb048281df0ff6cd273daf6f5ad9e0a348a882ef8a2834ef483b3a906e409\"" Sep 4 23:58:37.870859 containerd[1543]: time="2025-09-04T23:58:37.870335713Z" level=info msg="CreateContainer within sandbox \"463eb048281df0ff6cd273daf6f5ad9e0a348a882ef8a2834ef483b3a906e409\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:58:37.875231 containerd[1543]: time="2025-09-04T23:58:37.875200439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a5440d1f4bca03020ccecd7072f2e240649b01815608840702a5ac42569e1d3\"" Sep 4 23:58:37.875966 containerd[1543]: time="2025-09-04T23:58:37.875943556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7e22a5490ea140887c5a98868000439a39068460864a8d7093611c04233af11\"" Sep 4 23:58:37.878177 containerd[1543]: 
time="2025-09-04T23:58:37.878131691Z" level=info msg="CreateContainer within sandbox \"0a5440d1f4bca03020ccecd7072f2e240649b01815608840702a5ac42569e1d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:58:37.878539 containerd[1543]: time="2025-09-04T23:58:37.878514445Z" level=info msg="CreateContainer within sandbox \"e7e22a5490ea140887c5a98868000439a39068460864a8d7093611c04233af11\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:58:37.882980 containerd[1543]: time="2025-09-04T23:58:37.882838886Z" level=info msg="Container 5d9e3c52b29f8911835bbb4fc2dfab184dee4332d7450618b2783a858caa0fdf: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:58:37.885638 containerd[1543]: time="2025-09-04T23:58:37.885602954Z" level=info msg="Container a5318d74b68b5c4a555a0a74e07fcc0b88f040567e14948ccb21eaff3d5f81c7: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:58:37.892383 containerd[1543]: time="2025-09-04T23:58:37.892350806Z" level=info msg="CreateContainer within sandbox \"463eb048281df0ff6cd273daf6f5ad9e0a348a882ef8a2834ef483b3a906e409\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5d9e3c52b29f8911835bbb4fc2dfab184dee4332d7450618b2783a858caa0fdf\"" Sep 4 23:58:37.892914 containerd[1543]: time="2025-09-04T23:58:37.892892813Z" level=info msg="StartContainer for \"5d9e3c52b29f8911835bbb4fc2dfab184dee4332d7450618b2783a858caa0fdf\"" Sep 4 23:58:37.893907 containerd[1543]: time="2025-09-04T23:58:37.893884102Z" level=info msg="connecting to shim 5d9e3c52b29f8911835bbb4fc2dfab184dee4332d7450618b2783a858caa0fdf" address="unix:///run/containerd/s/4b55246e7c38e4bf922fd9b037330791fd32483e5ff229c1bbb31dbe95b44dad" protocol=ttrpc version=3 Sep 4 23:58:37.896654 containerd[1543]: time="2025-09-04T23:58:37.896621575Z" level=info msg="CreateContainer within sandbox \"0a5440d1f4bca03020ccecd7072f2e240649b01815608840702a5ac42569e1d3\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a5318d74b68b5c4a555a0a74e07fcc0b88f040567e14948ccb21eaff3d5f81c7\"" Sep 4 23:58:37.897047 containerd[1543]: time="2025-09-04T23:58:37.897022833Z" level=info msg="StartContainer for \"a5318d74b68b5c4a555a0a74e07fcc0b88f040567e14948ccb21eaff3d5f81c7\"" Sep 4 23:58:37.898059 containerd[1543]: time="2025-09-04T23:58:37.897989129Z" level=info msg="Container 8c4049d53cf23cd5631c0ab0afda31a9f96374b4642d0160a345dcf6aeb586ed: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:58:37.898199 containerd[1543]: time="2025-09-04T23:58:37.898173296Z" level=info msg="connecting to shim a5318d74b68b5c4a555a0a74e07fcc0b88f040567e14948ccb21eaff3d5f81c7" address="unix:///run/containerd/s/0fc4035a6743bb71645d981672fc9a32adef9c4a3c8740593864a5698f486522" protocol=ttrpc version=3 Sep 4 23:58:37.907379 containerd[1543]: time="2025-09-04T23:58:37.907287523Z" level=info msg="CreateContainer within sandbox \"e7e22a5490ea140887c5a98868000439a39068460864a8d7093611c04233af11\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c4049d53cf23cd5631c0ab0afda31a9f96374b4642d0160a345dcf6aeb586ed\"" Sep 4 23:58:37.908188 containerd[1543]: time="2025-09-04T23:58:37.907987662Z" level=info msg="StartContainer for \"8c4049d53cf23cd5631c0ab0afda31a9f96374b4642d0160a345dcf6aeb586ed\"" Sep 4 23:58:37.909174 containerd[1543]: time="2025-09-04T23:58:37.909117698Z" level=info msg="connecting to shim 8c4049d53cf23cd5631c0ab0afda31a9f96374b4642d0160a345dcf6aeb586ed" address="unix:///run/containerd/s/e88ccde4af12af40d1dddc941077832caf920e094e4074320530c4caf4dd8fa0" protocol=ttrpc version=3 Sep 4 23:58:37.915308 systemd[1]: Started cri-containerd-5d9e3c52b29f8911835bbb4fc2dfab184dee4332d7450618b2783a858caa0fdf.scope - libcontainer container 5d9e3c52b29f8911835bbb4fc2dfab184dee4332d7450618b2783a858caa0fdf. 
Sep 4 23:58:37.918091 systemd[1]: Started cri-containerd-a5318d74b68b5c4a555a0a74e07fcc0b88f040567e14948ccb21eaff3d5f81c7.scope - libcontainer container a5318d74b68b5c4a555a0a74e07fcc0b88f040567e14948ccb21eaff3d5f81c7. Sep 4 23:58:37.937305 systemd[1]: Started cri-containerd-8c4049d53cf23cd5631c0ab0afda31a9f96374b4642d0160a345dcf6aeb586ed.scope - libcontainer container 8c4049d53cf23cd5631c0ab0afda31a9f96374b4642d0160a345dcf6aeb586ed. Sep 4 23:58:37.974355 containerd[1543]: time="2025-09-04T23:58:37.974307748Z" level=info msg="StartContainer for \"a5318d74b68b5c4a555a0a74e07fcc0b88f040567e14948ccb21eaff3d5f81c7\" returns successfully" Sep 4 23:58:37.978316 containerd[1543]: time="2025-09-04T23:58:37.978283481Z" level=info msg="StartContainer for \"5d9e3c52b29f8911835bbb4fc2dfab184dee4332d7450618b2783a858caa0fdf\" returns successfully" Sep 4 23:58:37.986544 containerd[1543]: time="2025-09-04T23:58:37.986370249Z" level=info msg="StartContainer for \"8c4049d53cf23cd5631c0ab0afda31a9f96374b4642d0160a345dcf6aeb586ed\" returns successfully" Sep 4 23:58:37.991456 kubelet[2288]: E0904 23:58:37.991355 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:58:37.993768 kubelet[2288]: E0904 23:58:37.993740 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:58:37.996365 kubelet[2288]: E0904 23:58:37.996344 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:58:38.617410 kubelet[2288]: I0904 23:58:38.617375 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:58:38.999202 kubelet[2288]: E0904 23:58:38.998975 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Sep 4 23:58:38.999202 kubelet[2288]: E0904 23:58:38.999076 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:58:39.692579 kubelet[2288]: E0904 23:58:39.692546 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 23:58:39.753174 kubelet[2288]: E0904 23:58:39.752922 2288 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186239c39d939460 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:58:36.953015392 +0000 UTC m=+0.526938375,LastTimestamp:2025-09-04 23:58:36.953015392 +0000 UTC m=+0.526938375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:58:39.818146 kubelet[2288]: I0904 23:58:39.816279 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:58:39.857729 kubelet[2288]: I0904 23:58:39.857691 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:39.865867 kubelet[2288]: E0904 23:58:39.865835 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:39.865867 kubelet[2288]: I0904 23:58:39.865869 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 
23:58:39.868919 kubelet[2288]: E0904 23:58:39.868892 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:39.868919 kubelet[2288]: I0904 23:58:39.868921 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:58:39.870861 kubelet[2288]: E0904 23:58:39.870838 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 23:58:39.948750 kubelet[2288]: I0904 23:58:39.948399 2288 apiserver.go:52] "Watching apiserver" Sep 4 23:58:39.956798 kubelet[2288]: I0904 23:58:39.956772 2288 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:58:41.219953 kubelet[2288]: I0904 23:58:41.219912 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:41.856240 systemd[1]: Reload requested from client PID 2561 ('systemctl') (unit session-7.scope)... Sep 4 23:58:41.856256 systemd[1]: Reloading... Sep 4 23:58:41.913704 zram_generator::config[2604]: No configuration found. Sep 4 23:58:41.985970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:58:42.081907 systemd[1]: Reloading finished in 225 ms. Sep 4 23:58:42.108681 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:58:42.125132 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:58:42.125376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:58:42.125433 systemd[1]: kubelet.service: Consumed 890ms CPU time, 127.8M memory peak. Sep 4 23:58:42.129210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:58:42.253280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:58:42.257045 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:58:42.294580 kubelet[2646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:58:42.294580 kubelet[2646]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:58:42.294580 kubelet[2646]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:58:42.294924 kubelet[2646]: I0904 23:58:42.294643 2646 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:58:42.301485 kubelet[2646]: I0904 23:58:42.301437 2646 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:58:42.301485 kubelet[2646]: I0904 23:58:42.301469 2646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:58:42.301717 kubelet[2646]: I0904 23:58:42.301700 2646 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:58:42.302868 kubelet[2646]: I0904 23:58:42.302852 2646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 4 23:58:42.304977 kubelet[2646]: I0904 23:58:42.304959 2646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:58:42.309177 kubelet[2646]: I0904 23:58:42.308573 2646 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 23:58:42.311874 kubelet[2646]: I0904 23:58:42.311398 2646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:58:42.311874 kubelet[2646]: I0904 23:58:42.311617 2646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:58:42.311964 kubelet[2646]: I0904 23:58:42.311651 2646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:58:42.311964 kubelet[2646]: I0904 23:58:42.311928 2646 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:58:42.311964 kubelet[2646]: I0904 23:58:42.311940 2646 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:58:42.312084 kubelet[2646]: I0904 23:58:42.311983 2646 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:58:42.312125 kubelet[2646]: I0904 23:58:42.312112 2646 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:58:42.312253 kubelet[2646]: I0904 23:58:42.312128 2646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:58:42.312627 kubelet[2646]: I0904 23:58:42.312534 2646 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:58:42.312627 kubelet[2646]: I0904 23:58:42.312567 2646 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:58:42.315262 kubelet[2646]: I0904 23:58:42.315242 2646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 23:58:42.315700 kubelet[2646]: I0904 23:58:42.315686 2646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:58:42.316156 kubelet[2646]: I0904 23:58:42.316047 2646 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:58:42.316156 kubelet[2646]: I0904 23:58:42.316080 2646 server.go:1287] "Started kubelet" Sep 4 23:58:42.317003 kubelet[2646]: I0904 23:58:42.316948 2646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:58:42.317003 kubelet[2646]: 
I0904 23:58:42.317104 2646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:58:42.317003 kubelet[2646]: I0904 23:58:42.317208 2646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:58:42.317003 kubelet[2646]: I0904 23:58:42.317356 2646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:58:42.317003 kubelet[2646]: I0904 23:58:42.317466 2646 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:58:42.317003 kubelet[2646]: I0904 23:58:42.317528 2646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:58:42.317003 kubelet[2646]: I0904 23:58:42.317900 2646 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:58:42.318635 kubelet[2646]: I0904 23:58:42.318615 2646 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:58:42.318747 kubelet[2646]: I0904 23:58:42.318729 2646 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:58:42.319619 kubelet[2646]: E0904 23:58:42.319602 2646 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:58:42.322185 kubelet[2646]: I0904 23:58:42.322164 2646 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:58:42.322399 kubelet[2646]: I0904 23:58:42.322367 2646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:58:42.323824 kubelet[2646]: E0904 23:58:42.323353 2646 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:58:42.325145 kubelet[2646]: I0904 23:58:42.324041 2646 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:58:42.338331 kubelet[2646]: I0904 23:58:42.325193 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:58:42.339426 kubelet[2646]: I0904 23:58:42.339388 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:58:42.339531 kubelet[2646]: I0904 23:58:42.339521 2646 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:58:42.339883 kubelet[2646]: I0904 23:58:42.339689 2646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:58:42.340201 kubelet[2646]: I0904 23:58:42.340187 2646 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:58:42.340319 kubelet[2646]: E0904 23:58:42.340301 2646 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:58:42.372155 kubelet[2646]: I0904 23:58:42.372044 2646 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:58:42.372155 kubelet[2646]: I0904 23:58:42.372064 2646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:58:42.372155 kubelet[2646]: I0904 23:58:42.372085 2646 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:58:42.372299 kubelet[2646]: I0904 23:58:42.372252 2646 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:58:42.372299 kubelet[2646]: I0904 23:58:42.372262 2646 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:58:42.372299 kubelet[2646]: I0904 23:58:42.372279 2646 policy_none.go:49] "None policy: Start" Sep 4 23:58:42.372299 kubelet[2646]: I0904 23:58:42.372287 2646 memory_manager.go:186] "Starting memorymanager" 
policy="None" Sep 4 23:58:42.372299 kubelet[2646]: I0904 23:58:42.372296 2646 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:58:42.372392 kubelet[2646]: I0904 23:58:42.372381 2646 state_mem.go:75] "Updated machine memory state" Sep 4 23:58:42.376416 kubelet[2646]: I0904 23:58:42.376389 2646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:58:42.376552 kubelet[2646]: I0904 23:58:42.376536 2646 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:58:42.376581 kubelet[2646]: I0904 23:58:42.376554 2646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:58:42.376804 kubelet[2646]: I0904 23:58:42.376782 2646 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:58:42.377554 kubelet[2646]: E0904 23:58:42.377512 2646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 23:58:42.441324 kubelet[2646]: I0904 23:58:42.441286 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:58:42.441324 kubelet[2646]: I0904 23:58:42.441328 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:42.441469 kubelet[2646]: I0904 23:58:42.441373 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:42.450520 kubelet[2646]: E0904 23:58:42.450479 2646 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:42.478710 kubelet[2646]: I0904 23:58:42.478684 2646 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:58:42.486685 kubelet[2646]: I0904 23:58:42.486656 2646 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 23:58:42.487000 kubelet[2646]: I0904 23:58:42.486730 2646 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:58:42.520224 kubelet[2646]: I0904 23:58:42.520185 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d43850bbd5fc69fb1a21be74c6473e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d43850bbd5fc69fb1a21be74c6473e2\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:42.520224 kubelet[2646]: I0904 23:58:42.520224 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:42.520346 kubelet[2646]: I0904 
23:58:42.520245 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:42.520346 kubelet[2646]: I0904 23:58:42.520262 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:42.520346 kubelet[2646]: I0904 23:58:42.520279 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:42.520346 kubelet[2646]: I0904 23:58:42.520295 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:58:42.520346 kubelet[2646]: I0904 23:58:42.520310 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d43850bbd5fc69fb1a21be74c6473e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d43850bbd5fc69fb1a21be74c6473e2\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:42.520492 
kubelet[2646]: I0904 23:58:42.520324 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d43850bbd5fc69fb1a21be74c6473e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d43850bbd5fc69fb1a21be74c6473e2\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:58:42.520492 kubelet[2646]: I0904 23:58:42.520340 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:58:42.856039 sudo[2679]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:58:42.856341 sudo[2679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:58:43.281420 sudo[2679]: pam_unix(sudo:session): session closed for user root Sep 4 23:58:43.313996 kubelet[2646]: I0904 23:58:43.313944 2646 apiserver.go:52] "Watching apiserver" Sep 4 23:58:43.318739 kubelet[2646]: I0904 23:58:43.318703 2646 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:58:43.338766 kubelet[2646]: I0904 23:58:43.338696 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.338680028 podStartE2EDuration="1.338680028s" podCreationTimestamp="2025-09-04 23:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:58:43.338653612 +0000 UTC m=+1.078523412" watchObservedRunningTime="2025-09-04 23:58:43.338680028 +0000 UTC m=+1.078549828" Sep 4 23:58:43.354989 kubelet[2646]: I0904 23:58:43.354929 2646 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.354912408 podStartE2EDuration="2.354912408s" podCreationTimestamp="2025-09-04 23:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:58:43.35481711 +0000 UTC m=+1.094686910" watchObservedRunningTime="2025-09-04 23:58:43.354912408 +0000 UTC m=+1.094782168" Sep 4 23:58:43.355338 kubelet[2646]: I0904 23:58:43.355032 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.355028118 podStartE2EDuration="1.355028118s" podCreationTimestamp="2025-09-04 23:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:58:43.347658998 +0000 UTC m=+1.087528798" watchObservedRunningTime="2025-09-04 23:58:43.355028118 +0000 UTC m=+1.094897918" Sep 4 23:58:43.360998 kubelet[2646]: I0904 23:58:43.360963 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:58:43.367172 kubelet[2646]: E0904 23:58:43.367043 2646 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 23:58:44.832230 sudo[1745]: pam_unix(sudo:session): session closed for user root Sep 4 23:58:44.836682 sshd[1744]: Connection closed by 10.0.0.1 port 46608 Sep 4 23:58:44.837190 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 4 23:58:44.840632 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:58:44.840805 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:46608.service: Deactivated successfully. Sep 4 23:58:44.843632 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 4 23:58:44.843881 systemd[1]: session-7.scope: Consumed 5.802s CPU time, 264.2M memory peak. Sep 4 23:58:44.845567 systemd-logind[1519]: Removed session 7. Sep 4 23:58:48.247588 kubelet[2646]: I0904 23:58:48.247561 2646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:58:48.248250 containerd[1543]: time="2025-09-04T23:58:48.248153746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:58:48.248473 kubelet[2646]: I0904 23:58:48.248344 2646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:58:49.187016 systemd[1]: Created slice kubepods-besteffort-podf8dbddc8_7ad7_4fc0_b11b_9661fbe88bd1.slice - libcontainer container kubepods-besteffort-podf8dbddc8_7ad7_4fc0_b11b_9661fbe88bd1.slice. Sep 4 23:58:49.197525 systemd[1]: Created slice kubepods-burstable-pod1dfa9288_4f0b_442d_9138_9fe232970d3a.slice - libcontainer container kubepods-burstable-pod1dfa9288_4f0b_442d_9138_9fe232970d3a.slice. 
Sep 4 23:58:49.264043 kubelet[2646]: I0904 23:58:49.263998 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cni-path\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264043 kubelet[2646]: I0904 23:58:49.264042 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-lib-modules\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264398 kubelet[2646]: I0904 23:58:49.264063 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-config-path\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264398 kubelet[2646]: I0904 23:58:49.264077 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx5hs\" (UniqueName: \"kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-kube-api-access-bx5hs\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264398 kubelet[2646]: I0904 23:58:49.264094 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-run\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264398 kubelet[2646]: I0904 23:58:49.264116 2646 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-hostproc\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264398 kubelet[2646]: I0904 23:58:49.264148 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-xtables-lock\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264398 kubelet[2646]: I0904 23:58:49.264165 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dfa9288-4f0b-442d-9138-9fe232970d3a-clustermesh-secrets\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264520 kubelet[2646]: I0904 23:58:49.264180 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-cgroup\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264520 kubelet[2646]: I0904 23:58:49.264196 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-kernel\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264520 kubelet[2646]: I0904 23:58:49.264212 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1-kube-proxy\") pod \"kube-proxy-mgvbj\" (UID: \"f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1\") " pod="kube-system/kube-proxy-mgvbj" Sep 4 23:58:49.264520 kubelet[2646]: I0904 23:58:49.264226 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1-lib-modules\") pod \"kube-proxy-mgvbj\" (UID: \"f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1\") " pod="kube-system/kube-proxy-mgvbj" Sep 4 23:58:49.264520 kubelet[2646]: I0904 23:58:49.264295 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-bpf-maps\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264520 kubelet[2646]: I0904 23:58:49.264348 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-net\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264626 kubelet[2646]: I0904 23:58:49.264369 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzqvn\" (UniqueName: \"kubernetes.io/projected/f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1-kube-api-access-xzqvn\") pod \"kube-proxy-mgvbj\" (UID: \"f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1\") " pod="kube-system/kube-proxy-mgvbj" Sep 4 23:58:49.264626 kubelet[2646]: I0904 23:58:49.264385 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-etc-cni-netd\") pod \"cilium-m4wqm\" (UID: 
\"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264626 kubelet[2646]: I0904 23:58:49.264400 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-hubble-tls\") pod \"cilium-m4wqm\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") " pod="kube-system/cilium-m4wqm" Sep 4 23:58:49.264626 kubelet[2646]: I0904 23:58:49.264417 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1-xtables-lock\") pod \"kube-proxy-mgvbj\" (UID: \"f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1\") " pod="kube-system/kube-proxy-mgvbj" Sep 4 23:58:49.319343 systemd[1]: Created slice kubepods-besteffort-pod9046f78a_8aa4_4440_add9_7f298421896c.slice - libcontainer container kubepods-besteffort-pod9046f78a_8aa4_4440_add9_7f298421896c.slice. 
Sep 4 23:58:49.365643 kubelet[2646]: I0904 23:58:49.365608 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcv4c\" (UniqueName: \"kubernetes.io/projected/9046f78a-8aa4-4440-add9-7f298421896c-kube-api-access-jcv4c\") pod \"cilium-operator-6c4d7847fc-bzrzb\" (UID: \"9046f78a-8aa4-4440-add9-7f298421896c\") " pod="kube-system/cilium-operator-6c4d7847fc-bzrzb" Sep 4 23:58:49.365887 kubelet[2646]: I0904 23:58:49.365872 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9046f78a-8aa4-4440-add9-7f298421896c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bzrzb\" (UID: \"9046f78a-8aa4-4440-add9-7f298421896c\") " pod="kube-system/cilium-operator-6c4d7847fc-bzrzb" Sep 4 23:58:49.496681 containerd[1543]: time="2025-09-04T23:58:49.496577207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgvbj,Uid:f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1,Namespace:kube-system,Attempt:0,}" Sep 4 23:58:49.502219 containerd[1543]: time="2025-09-04T23:58:49.502177300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m4wqm,Uid:1dfa9288-4f0b-442d-9138-9fe232970d3a,Namespace:kube-system,Attempt:0,}" Sep 4 23:58:49.514626 containerd[1543]: time="2025-09-04T23:58:49.514579602Z" level=info msg="connecting to shim a86fd961b016ca5fb669ab83645c43565ae9a6f47b24bce3712304790dd65a9c" address="unix:///run/containerd/s/38b7ce43bf6983be44f15e2cb26aced4d86df95a1e1831eea73fb758a7c5fa3e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:58:49.524549 containerd[1543]: time="2025-09-04T23:58:49.524506126Z" level=info msg="connecting to shim 6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d" address="unix:///run/containerd/s/c19a3369f95114a1826d9b2b34a2fdc4325f2e5f2582d1e98acac1cdf9e42582" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:58:49.538342 systemd[1]: Started 
cri-containerd-a86fd961b016ca5fb669ab83645c43565ae9a6f47b24bce3712304790dd65a9c.scope - libcontainer container a86fd961b016ca5fb669ab83645c43565ae9a6f47b24bce3712304790dd65a9c. Sep 4 23:58:49.542672 systemd[1]: Started cri-containerd-6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d.scope - libcontainer container 6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d. Sep 4 23:58:49.569323 containerd[1543]: time="2025-09-04T23:58:49.569266897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgvbj,Uid:f8dbddc8-7ad7-4fc0-b11b-9661fbe88bd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a86fd961b016ca5fb669ab83645c43565ae9a6f47b24bce3712304790dd65a9c\"" Sep 4 23:58:49.571502 containerd[1543]: time="2025-09-04T23:58:49.571406303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m4wqm,Uid:1dfa9288-4f0b-442d-9138-9fe232970d3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\"" Sep 4 23:58:49.576475 containerd[1543]: time="2025-09-04T23:58:49.576443214Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:58:49.584465 containerd[1543]: time="2025-09-04T23:58:49.584432051Z" level=info msg="CreateContainer within sandbox \"a86fd961b016ca5fb669ab83645c43565ae9a6f47b24bce3712304790dd65a9c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:58:49.597215 containerd[1543]: time="2025-09-04T23:58:49.597031631Z" level=info msg="Container 5ba45c7f28fb1c443d8a5a1e066cda84b88d436c04743c0203dfe155c45a7f42: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:58:49.605322 containerd[1543]: time="2025-09-04T23:58:49.605271528Z" level=info msg="CreateContainer within sandbox \"a86fd961b016ca5fb669ab83645c43565ae9a6f47b24bce3712304790dd65a9c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"5ba45c7f28fb1c443d8a5a1e066cda84b88d436c04743c0203dfe155c45a7f42\"" Sep 4 23:58:49.606955 containerd[1543]: time="2025-09-04T23:58:49.606880644Z" level=info msg="StartContainer for \"5ba45c7f28fb1c443d8a5a1e066cda84b88d436c04743c0203dfe155c45a7f42\"" Sep 4 23:58:49.610281 containerd[1543]: time="2025-09-04T23:58:49.610226086Z" level=info msg="connecting to shim 5ba45c7f28fb1c443d8a5a1e066cda84b88d436c04743c0203dfe155c45a7f42" address="unix:///run/containerd/s/38b7ce43bf6983be44f15e2cb26aced4d86df95a1e1831eea73fb758a7c5fa3e" protocol=ttrpc version=3 Sep 4 23:58:49.624053 containerd[1543]: time="2025-09-04T23:58:49.624019058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bzrzb,Uid:9046f78a-8aa4-4440-add9-7f298421896c,Namespace:kube-system,Attempt:0,}" Sep 4 23:58:49.630317 systemd[1]: Started cri-containerd-5ba45c7f28fb1c443d8a5a1e066cda84b88d436c04743c0203dfe155c45a7f42.scope - libcontainer container 5ba45c7f28fb1c443d8a5a1e066cda84b88d436c04743c0203dfe155c45a7f42. Sep 4 23:58:49.641727 containerd[1543]: time="2025-09-04T23:58:49.641679678Z" level=info msg="connecting to shim 2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed" address="unix:///run/containerd/s/1040f8989e2c97043f6f8b33da7d4b697519236adf531ad374017cf5f2795775" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:58:49.672368 systemd[1]: Started cri-containerd-2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed.scope - libcontainer container 2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed. 
Sep 4 23:58:49.682340 containerd[1543]: time="2025-09-04T23:58:49.682302414Z" level=info msg="StartContainer for \"5ba45c7f28fb1c443d8a5a1e066cda84b88d436c04743c0203dfe155c45a7f42\" returns successfully" Sep 4 23:58:49.711092 containerd[1543]: time="2025-09-04T23:58:49.711035931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bzrzb,Uid:9046f78a-8aa4-4440-add9-7f298421896c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\"" Sep 4 23:58:50.388960 kubelet[2646]: I0904 23:58:50.388896 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mgvbj" podStartSLOduration=1.388878015 podStartE2EDuration="1.388878015s" podCreationTimestamp="2025-09-04 23:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:58:50.388747606 +0000 UTC m=+8.128617406" watchObservedRunningTime="2025-09-04 23:58:50.388878015 +0000 UTC m=+8.128747775" Sep 4 23:58:58.552259 update_engine[1525]: I20250904 23:58:58.552189 1525 update_attempter.cc:509] Updating boot flags... Sep 4 23:58:58.614531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177584884.mount: Deactivated successfully. 
Sep 4 23:58:59.918206 containerd[1543]: time="2025-09-04T23:58:59.918160130Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:59.918667 containerd[1543]: time="2025-09-04T23:58:59.918638361Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 4 23:58:59.919409 containerd[1543]: time="2025-09-04T23:58:59.919370811Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:58:59.920852 containerd[1543]: time="2025-09-04T23:58:59.920718804Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.344236017s" Sep 4 23:58:59.920852 containerd[1543]: time="2025-09-04T23:58:59.920756853Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 23:58:59.929465 containerd[1543]: time="2025-09-04T23:58:59.929424945Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 23:58:59.941547 containerd[1543]: time="2025-09-04T23:58:59.941478182Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:58:59.949376 containerd[1543]: time="2025-09-04T23:58:59.949338487Z" level=info msg="Container 8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:58:59.954421 containerd[1543]: time="2025-09-04T23:58:59.954293036Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\"" Sep 4 23:58:59.957162 containerd[1543]: time="2025-09-04T23:58:59.957011747Z" level=info msg="StartContainer for \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\"" Sep 4 23:58:59.958080 containerd[1543]: time="2025-09-04T23:58:59.958051629Z" level=info msg="connecting to shim 8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c" address="unix:///run/containerd/s/c19a3369f95114a1826d9b2b34a2fdc4325f2e5f2582d1e98acac1cdf9e42582" protocol=ttrpc version=3 Sep 4 23:59:00.002391 systemd[1]: Started cri-containerd-8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c.scope - libcontainer container 8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c. Sep 4 23:59:00.035049 containerd[1543]: time="2025-09-04T23:59:00.034987784Z" level=info msg="StartContainer for \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" returns successfully" Sep 4 23:59:00.050747 systemd[1]: cri-containerd-8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c.scope: Deactivated successfully. 
Sep 4 23:59:00.073818 containerd[1543]: time="2025-09-04T23:59:00.073766148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" id:\"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" pid:3075 exited_at:{seconds:1757030340 nanos:68169592}" Sep 4 23:59:00.074600 containerd[1543]: time="2025-09-04T23:59:00.074542119Z" level=info msg="received exit event container_id:\"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" id:\"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" pid:3075 exited_at:{seconds:1757030340 nanos:68169592}" Sep 4 23:59:00.110202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c-rootfs.mount: Deactivated successfully. Sep 4 23:59:00.406724 containerd[1543]: time="2025-09-04T23:59:00.406618975Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:59:00.417171 containerd[1543]: time="2025-09-04T23:59:00.416634587Z" level=info msg="Container dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:59:00.423279 containerd[1543]: time="2025-09-04T23:59:00.423238205Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\"" Sep 4 23:59:00.423824 containerd[1543]: time="2025-09-04T23:59:00.423800689Z" level=info msg="StartContainer for \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\"" Sep 4 23:59:00.424811 containerd[1543]: time="2025-09-04T23:59:00.424774624Z" level=info msg="connecting to shim 
dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a" address="unix:///run/containerd/s/c19a3369f95114a1826d9b2b34a2fdc4325f2e5f2582d1e98acac1cdf9e42582" protocol=ttrpc version=3 Sep 4 23:59:00.441305 systemd[1]: Started cri-containerd-dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a.scope - libcontainer container dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a. Sep 4 23:59:00.468413 containerd[1543]: time="2025-09-04T23:59:00.468356809Z" level=info msg="StartContainer for \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" returns successfully" Sep 4 23:59:00.483225 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:59:00.483453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:59:00.483870 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:59:00.486052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:59:00.488313 systemd[1]: cri-containerd-dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a.scope: Deactivated successfully. Sep 4 23:59:00.513674 containerd[1543]: time="2025-09-04T23:59:00.498081574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" id:\"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" pid:3121 exited_at:{seconds:1757030340 nanos:497767064}" Sep 4 23:59:00.513674 containerd[1543]: time="2025-09-04T23:59:00.509073401Z" level=info msg="received exit event container_id:\"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" id:\"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" pid:3121 exited_at:{seconds:1757030340 nanos:497767064}" Sep 4 23:59:00.529225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 23:59:01.125318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800031928.mount: Deactivated successfully. Sep 4 23:59:01.415481 containerd[1543]: time="2025-09-04T23:59:01.413913946Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:59:01.430317 containerd[1543]: time="2025-09-04T23:59:01.430278868Z" level=info msg="Container ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:59:01.458444 containerd[1543]: time="2025-09-04T23:59:01.458396501Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\"" Sep 4 23:59:01.459128 containerd[1543]: time="2025-09-04T23:59:01.458964380Z" level=info msg="StartContainer for \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\"" Sep 4 23:59:01.460385 containerd[1543]: time="2025-09-04T23:59:01.460359194Z" level=info msg="connecting to shim ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b" address="unix:///run/containerd/s/c19a3369f95114a1826d9b2b34a2fdc4325f2e5f2582d1e98acac1cdf9e42582" protocol=ttrpc version=3 Sep 4 23:59:01.506295 systemd[1]: Started cri-containerd-ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b.scope - libcontainer container ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b. Sep 4 23:59:01.550886 containerd[1543]: time="2025-09-04T23:59:01.550848823Z" level=info msg="StartContainer for \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" returns successfully" Sep 4 23:59:01.550973 systemd[1]: cri-containerd-ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b.scope: Deactivated successfully. 
Sep 4 23:59:01.553463 containerd[1543]: time="2025-09-04T23:59:01.553430126Z" level=info msg="received exit event container_id:\"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" id:\"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" pid:3181 exited_at:{seconds:1757030341 nanos:553214280}" Sep 4 23:59:01.554072 containerd[1543]: time="2025-09-04T23:59:01.554038614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" id:\"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" pid:3181 exited_at:{seconds:1757030341 nanos:553214280}" Sep 4 23:59:01.657237 containerd[1543]: time="2025-09-04T23:59:01.657195547Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:59:01.657745 containerd[1543]: time="2025-09-04T23:59:01.657706375Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 4 23:59:01.658616 containerd[1543]: time="2025-09-04T23:59:01.658576638Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:59:01.659992 containerd[1543]: time="2025-09-04T23:59:01.659821339Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.730352545s" Sep 4 23:59:01.659992 containerd[1543]: 
time="2025-09-04T23:59:01.659863228Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 23:59:01.663044 containerd[1543]: time="2025-09-04T23:59:01.663007209Z" level=info msg="CreateContainer within sandbox \"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 23:59:01.670045 containerd[1543]: time="2025-09-04T23:59:01.669572230Z" level=info msg="Container 33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:59:01.674879 containerd[1543]: time="2025-09-04T23:59:01.674832376Z" level=info msg="CreateContainer within sandbox \"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\"" Sep 4 23:59:01.675431 containerd[1543]: time="2025-09-04T23:59:01.675395935Z" level=info msg="StartContainer for \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\"" Sep 4 23:59:01.676479 containerd[1543]: time="2025-09-04T23:59:01.676436873Z" level=info msg="connecting to shim 33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f" address="unix:///run/containerd/s/1040f8989e2c97043f6f8b33da7d4b697519236adf531ad374017cf5f2795775" protocol=ttrpc version=3 Sep 4 23:59:01.698510 systemd[1]: Started cri-containerd-33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f.scope - libcontainer container 33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f. 
Sep 4 23:59:01.739858 containerd[1543]: time="2025-09-04T23:59:01.739796078Z" level=info msg="StartContainer for \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" returns successfully" Sep 4 23:59:02.433106 containerd[1543]: time="2025-09-04T23:59:02.433038334Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:59:02.441898 kubelet[2646]: I0904 23:59:02.441742 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bzrzb" podStartSLOduration=1.493771247 podStartE2EDuration="13.441724594s" podCreationTimestamp="2025-09-04 23:58:49 +0000 UTC" firstStartedPulling="2025-09-04 23:58:49.712795067 +0000 UTC m=+7.452664867" lastFinishedPulling="2025-09-04 23:59:01.660748414 +0000 UTC m=+19.400618214" observedRunningTime="2025-09-04 23:59:02.441196969 +0000 UTC m=+20.181066809" watchObservedRunningTime="2025-09-04 23:59:02.441724594 +0000 UTC m=+20.181594394" Sep 4 23:59:02.454188 containerd[1543]: time="2025-09-04T23:59:02.452245023Z" level=info msg="Container d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:59:02.452615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731934409.mount: Deactivated successfully. 
Sep 4 23:59:02.463787 containerd[1543]: time="2025-09-04T23:59:02.463743407Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\"" Sep 4 23:59:02.465767 containerd[1543]: time="2025-09-04T23:59:02.465738847Z" level=info msg="StartContainer for \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\"" Sep 4 23:59:02.468681 containerd[1543]: time="2025-09-04T23:59:02.467982057Z" level=info msg="connecting to shim d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94" address="unix:///run/containerd/s/c19a3369f95114a1826d9b2b34a2fdc4325f2e5f2582d1e98acac1cdf9e42582" protocol=ttrpc version=3 Sep 4 23:59:02.491516 systemd[1]: Started cri-containerd-d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94.scope - libcontainer container d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94. Sep 4 23:59:02.520241 systemd[1]: cri-containerd-d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94.scope: Deactivated successfully. 
Sep 4 23:59:02.520653 containerd[1543]: time="2025-09-04T23:59:02.520610044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" id:\"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" pid:3259 exited_at:{seconds:1757030342 nanos:520366555}" Sep 4 23:59:02.523348 containerd[1543]: time="2025-09-04T23:59:02.523299983Z" level=info msg="received exit event container_id:\"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" id:\"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" pid:3259 exited_at:{seconds:1757030342 nanos:520366555}" Sep 4 23:59:02.540018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94-rootfs.mount: Deactivated successfully. Sep 4 23:59:02.547488 containerd[1543]: time="2025-09-04T23:59:02.547405094Z" level=error msg="copy shim log after reload" error="read /proc/self/fd/35: file already closed" Sep 4 23:59:02.547948 containerd[1543]: time="2025-09-04T23:59:02.547519997Z" level=info msg="StartContainer for \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" returns successfully" Sep 4 23:59:03.444675 containerd[1543]: time="2025-09-04T23:59:03.443895786Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:59:03.485328 containerd[1543]: time="2025-09-04T23:59:03.484567800Z" level=info msg="Container 19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:59:03.488506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819699380.mount: Deactivated successfully. 
Sep 4 23:59:03.492629 containerd[1543]: time="2025-09-04T23:59:03.492595574Z" level=info msg="CreateContainer within sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\"" Sep 4 23:59:03.493220 containerd[1543]: time="2025-09-04T23:59:03.493187607Z" level=info msg="StartContainer for \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\"" Sep 4 23:59:03.495176 containerd[1543]: time="2025-09-04T23:59:03.495152663Z" level=info msg="connecting to shim 19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f" address="unix:///run/containerd/s/c19a3369f95114a1826d9b2b34a2fdc4325f2e5f2582d1e98acac1cdf9e42582" protocol=ttrpc version=3 Sep 4 23:59:03.513299 systemd[1]: Started cri-containerd-19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f.scope - libcontainer container 19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f. Sep 4 23:59:03.540312 containerd[1543]: time="2025-09-04T23:59:03.540274847Z" level=info msg="StartContainer for \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" returns successfully" Sep 4 23:59:03.634701 containerd[1543]: time="2025-09-04T23:59:03.634337666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" id:\"b4e3613c9ea8873e8acecf2e091dc81823de12827a6bc9cc8ad383c2ae284a5a\" pid:3327 exited_at:{seconds:1757030343 nanos:634004963}" Sep 4 23:59:03.722858 kubelet[2646]: I0904 23:59:03.722292 2646 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 23:59:03.765220 systemd[1]: Created slice kubepods-burstable-pod949625c7_e137_40f9_91d7_609627b0aa62.slice - libcontainer container kubepods-burstable-pod949625c7_e137_40f9_91d7_609627b0aa62.slice. 
Sep 4 23:59:03.770563 kubelet[2646]: I0904 23:59:03.770533 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9nt\" (UniqueName: \"kubernetes.io/projected/bf97e0aa-e084-4b6d-b5ab-075cd68bc527-kube-api-access-ks9nt\") pod \"coredns-668d6bf9bc-d567n\" (UID: \"bf97e0aa-e084-4b6d-b5ab-075cd68bc527\") " pod="kube-system/coredns-668d6bf9bc-d567n" Sep 4 23:59:03.770685 kubelet[2646]: I0904 23:59:03.770575 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949625c7-e137-40f9-91d7-609627b0aa62-config-volume\") pod \"coredns-668d6bf9bc-95ddj\" (UID: \"949625c7-e137-40f9-91d7-609627b0aa62\") " pod="kube-system/coredns-668d6bf9bc-95ddj" Sep 4 23:59:03.770685 kubelet[2646]: I0904 23:59:03.770593 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrcd\" (UniqueName: \"kubernetes.io/projected/949625c7-e137-40f9-91d7-609627b0aa62-kube-api-access-mxrcd\") pod \"coredns-668d6bf9bc-95ddj\" (UID: \"949625c7-e137-40f9-91d7-609627b0aa62\") " pod="kube-system/coredns-668d6bf9bc-95ddj" Sep 4 23:59:03.770685 kubelet[2646]: I0904 23:59:03.770610 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf97e0aa-e084-4b6d-b5ab-075cd68bc527-config-volume\") pod \"coredns-668d6bf9bc-d567n\" (UID: \"bf97e0aa-e084-4b6d-b5ab-075cd68bc527\") " pod="kube-system/coredns-668d6bf9bc-d567n" Sep 4 23:59:03.776123 systemd[1]: Created slice kubepods-burstable-podbf97e0aa_e084_4b6d_b5ab_075cd68bc527.slice - libcontainer container kubepods-burstable-podbf97e0aa_e084_4b6d_b5ab_075cd68bc527.slice. 
Sep 4 23:59:04.074466 containerd[1543]: time="2025-09-04T23:59:04.074281641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95ddj,Uid:949625c7-e137-40f9-91d7-609627b0aa62,Namespace:kube-system,Attempt:0,}" Sep 4 23:59:04.082193 containerd[1543]: time="2025-09-04T23:59:04.082086265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d567n,Uid:bf97e0aa-e084-4b6d-b5ab-075cd68bc527,Namespace:kube-system,Attempt:0,}" Sep 4 23:59:04.460411 kubelet[2646]: I0904 23:59:04.460351 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m4wqm" podStartSLOduration=5.107205321 podStartE2EDuration="15.460335156s" podCreationTimestamp="2025-09-04 23:58:49 +0000 UTC" firstStartedPulling="2025-09-04 23:58:49.576019246 +0000 UTC m=+7.315889046" lastFinishedPulling="2025-09-04 23:58:59.929149121 +0000 UTC m=+17.669018881" observedRunningTime="2025-09-04 23:59:04.459574057 +0000 UTC m=+22.199443857" watchObservedRunningTime="2025-09-04 23:59:04.460335156 +0000 UTC m=+22.200204916" Sep 4 23:59:05.577300 systemd-networkd[1445]: cilium_host: Link UP Sep 4 23:59:05.577437 systemd-networkd[1445]: cilium_net: Link UP Sep 4 23:59:05.577553 systemd-networkd[1445]: cilium_host: Gained carrier Sep 4 23:59:05.577651 systemd-networkd[1445]: cilium_net: Gained carrier Sep 4 23:59:05.656800 systemd-networkd[1445]: cilium_vxlan: Link UP Sep 4 23:59:05.656806 systemd-networkd[1445]: cilium_vxlan: Gained carrier Sep 4 23:59:05.909159 kernel: NET: Registered PF_ALG protocol family Sep 4 23:59:06.271332 systemd-networkd[1445]: cilium_net: Gained IPv6LL Sep 4 23:59:06.480403 systemd-networkd[1445]: lxc_health: Link UP Sep 4 23:59:06.481681 systemd-networkd[1445]: lxc_health: Gained carrier Sep 4 23:59:06.528304 systemd-networkd[1445]: cilium_host: Gained IPv6LL Sep 4 23:59:06.615464 systemd-networkd[1445]: lxc5cb45c8724c5: Link UP Sep 4 23:59:06.615601 systemd-networkd[1445]: lxce8b532e23297: Link UP Sep 4 
23:59:06.625149 kernel: eth0: renamed from tmp0ec0d Sep 4 23:59:06.638238 systemd-networkd[1445]: lxce8b532e23297: Gained carrier Sep 4 23:59:06.641364 kernel: eth0: renamed from tmp1236b Sep 4 23:59:06.640425 systemd-networkd[1445]: lxc5cb45c8724c5: Gained carrier Sep 4 23:59:06.911316 systemd-networkd[1445]: cilium_vxlan: Gained IPv6LL Sep 4 23:59:07.743329 systemd-networkd[1445]: lxc_health: Gained IPv6LL Sep 4 23:59:07.871284 systemd-networkd[1445]: lxce8b532e23297: Gained IPv6LL Sep 4 23:59:08.127328 systemd-networkd[1445]: lxc5cb45c8724c5: Gained IPv6LL Sep 4 23:59:10.199885 containerd[1543]: time="2025-09-04T23:59:10.199812150Z" level=info msg="connecting to shim 0ec0dee289058f71d9f1f8089b0181a0ff0df707d60422337f09b208e66e7ca9" address="unix:///run/containerd/s/e58fda9c750df29265d3eb08db3efa3d8070d4a813c829bb0b9fc5c08183758b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:59:10.200390 containerd[1543]: time="2025-09-04T23:59:10.200346465Z" level=info msg="connecting to shim 1236b8f14de3c46267f1fffd2b893f00730cfd85f1e2a2b7a506f62d36167aed" address="unix:///run/containerd/s/3f2488a12ffef5081033f2d6922ac00aed09858712e2887a4eeefdcc35f73df2" namespace=k8s.io protocol=ttrpc version=3 Sep 4 23:59:10.228359 systemd[1]: Started cri-containerd-0ec0dee289058f71d9f1f8089b0181a0ff0df707d60422337f09b208e66e7ca9.scope - libcontainer container 0ec0dee289058f71d9f1f8089b0181a0ff0df707d60422337f09b208e66e7ca9. Sep 4 23:59:10.232303 systemd[1]: Started cri-containerd-1236b8f14de3c46267f1fffd2b893f00730cfd85f1e2a2b7a506f62d36167aed.scope - libcontainer container 1236b8f14de3c46267f1fffd2b893f00730cfd85f1e2a2b7a506f62d36167aed. 
Sep 4 23:59:10.242241 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:59:10.244775 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:59:10.274482 containerd[1543]: time="2025-09-04T23:59:10.274436962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95ddj,Uid:949625c7-e137-40f9-91d7-609627b0aa62,Namespace:kube-system,Attempt:0,} returns sandbox id \"1236b8f14de3c46267f1fffd2b893f00730cfd85f1e2a2b7a506f62d36167aed\"" Sep 4 23:59:10.276426 containerd[1543]: time="2025-09-04T23:59:10.276389316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d567n,Uid:bf97e0aa-e084-4b6d-b5ab-075cd68bc527,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ec0dee289058f71d9f1f8089b0181a0ff0df707d60422337f09b208e66e7ca9\"" Sep 4 23:59:10.278038 containerd[1543]: time="2025-09-04T23:59:10.277715103Z" level=info msg="CreateContainer within sandbox \"1236b8f14de3c46267f1fffd2b893f00730cfd85f1e2a2b7a506f62d36167aed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:59:10.278446 containerd[1543]: time="2025-09-04T23:59:10.278418762Z" level=info msg="CreateContainer within sandbox \"0ec0dee289058f71d9f1f8089b0181a0ff0df707d60422337f09b208e66e7ca9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:59:10.290122 containerd[1543]: time="2025-09-04T23:59:10.289453433Z" level=info msg="Container efaf7cc9b62c3fec441257537202e756def39e14d4f7633cb84f1ce31a57ab2e: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:59:10.300868 containerd[1543]: time="2025-09-04T23:59:10.300823912Z" level=info msg="CreateContainer within sandbox \"1236b8f14de3c46267f1fffd2b893f00730cfd85f1e2a2b7a506f62d36167aed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efaf7cc9b62c3fec441257537202e756def39e14d4f7633cb84f1ce31a57ab2e\"" Sep 4 
23:59:10.301164 containerd[1543]: time="2025-09-04T23:59:10.301124154Z" level=info msg="Container 3eea878d83ca65648a81f690ec1b07bf49d15c9b37cacab4db8e2574af36f98b: CDI devices from CRI Config.CDIDevices: []" Sep 4 23:59:10.301895 containerd[1543]: time="2025-09-04T23:59:10.301792208Z" level=info msg="StartContainer for \"efaf7cc9b62c3fec441257537202e756def39e14d4f7633cb84f1ce31a57ab2e\"" Sep 4 23:59:10.302977 containerd[1543]: time="2025-09-04T23:59:10.302950451Z" level=info msg="connecting to shim efaf7cc9b62c3fec441257537202e756def39e14d4f7633cb84f1ce31a57ab2e" address="unix:///run/containerd/s/3f2488a12ffef5081033f2d6922ac00aed09858712e2887a4eeefdcc35f73df2" protocol=ttrpc version=3 Sep 4 23:59:10.309799 containerd[1543]: time="2025-09-04T23:59:10.309755608Z" level=info msg="CreateContainer within sandbox \"0ec0dee289058f71d9f1f8089b0181a0ff0df707d60422337f09b208e66e7ca9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3eea878d83ca65648a81f690ec1b07bf49d15c9b37cacab4db8e2574af36f98b\"" Sep 4 23:59:10.310726 containerd[1543]: time="2025-09-04T23:59:10.310619529Z" level=info msg="StartContainer for \"3eea878d83ca65648a81f690ec1b07bf49d15c9b37cacab4db8e2574af36f98b\"" Sep 4 23:59:10.311751 containerd[1543]: time="2025-09-04T23:59:10.311718003Z" level=info msg="connecting to shim 3eea878d83ca65648a81f690ec1b07bf49d15c9b37cacab4db8e2574af36f98b" address="unix:///run/containerd/s/e58fda9c750df29265d3eb08db3efa3d8070d4a813c829bb0b9fc5c08183758b" protocol=ttrpc version=3 Sep 4 23:59:10.328350 systemd[1]: Started cri-containerd-efaf7cc9b62c3fec441257537202e756def39e14d4f7633cb84f1ce31a57ab2e.scope - libcontainer container efaf7cc9b62c3fec441257537202e756def39e14d4f7633cb84f1ce31a57ab2e. Sep 4 23:59:10.331933 systemd[1]: Started cri-containerd-3eea878d83ca65648a81f690ec1b07bf49d15c9b37cacab4db8e2574af36f98b.scope - libcontainer container 3eea878d83ca65648a81f690ec1b07bf49d15c9b37cacab4db8e2574af36f98b. 
Sep 4 23:59:10.369536 containerd[1543]: time="2025-09-04T23:59:10.369424237Z" level=info msg="StartContainer for \"efaf7cc9b62c3fec441257537202e756def39e14d4f7633cb84f1ce31a57ab2e\" returns successfully" Sep 4 23:59:10.375270 containerd[1543]: time="2025-09-04T23:59:10.375103435Z" level=info msg="StartContainer for \"3eea878d83ca65648a81f690ec1b07bf49d15c9b37cacab4db8e2574af36f98b\" returns successfully" Sep 4 23:59:10.476233 kubelet[2646]: I0904 23:59:10.476169 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d567n" podStartSLOduration=21.476148322 podStartE2EDuration="21.476148322s" podCreationTimestamp="2025-09-04 23:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:59:10.474548177 +0000 UTC m=+28.214417977" watchObservedRunningTime="2025-09-04 23:59:10.476148322 +0000 UTC m=+28.216018162" Sep 4 23:59:10.487669 kubelet[2646]: I0904 23:59:10.487091 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-95ddj" podStartSLOduration=21.487075098 podStartE2EDuration="21.487075098s" podCreationTimestamp="2025-09-04 23:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:59:10.486833344 +0000 UTC m=+28.226703144" watchObservedRunningTime="2025-09-04 23:59:10.487075098 +0000 UTC m=+28.226944898" Sep 4 23:59:10.983352 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:42770.service - OpenSSH per-connection server daemon (10.0.0.1:42770). 
Sep 4 23:59:11.045839 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 42770 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:11.047232 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:11.051475 systemd-logind[1519]: New session 8 of user core. Sep 4 23:59:11.069367 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 23:59:11.185432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824693586.mount: Deactivated successfully. Sep 4 23:59:11.208202 sshd[3982]: Connection closed by 10.0.0.1 port 42770 Sep 4 23:59:11.208539 sshd-session[3980]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:11.212130 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:42770.service: Deactivated successfully. Sep 4 23:59:11.214739 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:59:11.215585 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:59:11.216529 systemd-logind[1519]: Removed session 8. Sep 4 23:59:16.220429 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:42786.service - OpenSSH per-connection server daemon (10.0.0.1:42786). Sep 4 23:59:16.266522 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 42786 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:16.267885 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:16.272607 systemd-logind[1519]: New session 9 of user core. Sep 4 23:59:16.281302 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:59:16.407533 sshd[4006]: Connection closed by 10.0.0.1 port 42786 Sep 4 23:59:16.407972 sshd-session[4004]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:16.411314 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:42786.service: Deactivated successfully. Sep 4 23:59:16.414448 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 4 23:59:16.415466 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:59:16.416911 systemd-logind[1519]: Removed session 9. Sep 4 23:59:21.423370 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:57212.service - OpenSSH per-connection server daemon (10.0.0.1:57212). Sep 4 23:59:21.486841 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 57212 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:21.488053 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:21.492204 systemd-logind[1519]: New session 10 of user core. Sep 4 23:59:21.498286 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:59:21.618191 sshd[4024]: Connection closed by 10.0.0.1 port 57212 Sep 4 23:59:21.618688 sshd-session[4022]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:21.624600 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:57212.service: Deactivated successfully. Sep 4 23:59:21.626209 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:59:21.627286 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:59:21.629351 systemd-logind[1519]: Removed session 10. Sep 4 23:59:26.644978 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:57216.service - OpenSSH per-connection server daemon (10.0.0.1:57216). Sep 4 23:59:26.701154 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 57216 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:26.702903 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:26.708074 systemd-logind[1519]: New session 11 of user core. Sep 4 23:59:26.727384 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 4 23:59:26.853689 sshd[4043]: Connection closed by 10.0.0.1 port 57216 Sep 4 23:59:26.854241 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:26.866326 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:57216.service: Deactivated successfully. Sep 4 23:59:26.869587 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:59:26.870256 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:59:26.872560 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:57230.service - OpenSSH per-connection server daemon (10.0.0.1:57230). Sep 4 23:59:26.873356 systemd-logind[1519]: Removed session 11. Sep 4 23:59:26.921934 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 57230 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:26.923356 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:26.927264 systemd-logind[1519]: New session 12 of user core. Sep 4 23:59:26.942326 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 23:59:27.097711 sshd[4059]: Connection closed by 10.0.0.1 port 57230 Sep 4 23:59:27.098357 sshd-session[4057]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:27.108250 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:57230.service: Deactivated successfully. Sep 4 23:59:27.111458 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:59:27.112766 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:59:27.117327 systemd-logind[1519]: Removed session 12. Sep 4 23:59:27.118478 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:57240.service - OpenSSH per-connection server daemon (10.0.0.1:57240). 
Sep 4 23:59:27.180822 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 57240 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:27.182427 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:27.188953 systemd-logind[1519]: New session 13 of user core.
Sep 4 23:59:27.207426 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 23:59:27.349216 sshd[4073]: Connection closed by 10.0.0.1 port 57240
Sep 4 23:59:27.349533 sshd-session[4071]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:27.352568 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:57240.service: Deactivated successfully.
Sep 4 23:59:27.354438 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 23:59:27.355666 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit.
Sep 4 23:59:27.356995 systemd-logind[1519]: Removed session 13.
Sep 4 23:59:32.364707 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:41538.service - OpenSSH per-connection server daemon (10.0.0.1:41538).
Sep 4 23:59:32.409075 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 41538 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:32.410430 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:32.415751 systemd-logind[1519]: New session 14 of user core.
Sep 4 23:59:32.429336 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 23:59:32.549378 sshd[4088]: Connection closed by 10.0.0.1 port 41538
Sep 4 23:59:32.549720 sshd-session[4086]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:32.559339 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:41538.service: Deactivated successfully.
Sep 4 23:59:32.561085 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 23:59:32.561866 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit.
Sep 4 23:59:32.564360 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:41554.service - OpenSSH per-connection server daemon (10.0.0.1:41554).
Sep 4 23:59:32.564841 systemd-logind[1519]: Removed session 14.
Sep 4 23:59:32.616841 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 41554 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:32.618478 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:32.622353 systemd-logind[1519]: New session 15 of user core.
Sep 4 23:59:32.639310 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 23:59:32.856123 sshd[4103]: Connection closed by 10.0.0.1 port 41554
Sep 4 23:59:32.856815 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:32.867529 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:41554.service: Deactivated successfully.
Sep 4 23:59:32.869454 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 23:59:32.870387 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit.
Sep 4 23:59:32.873153 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:41564.service - OpenSSH per-connection server daemon (10.0.0.1:41564).
Sep 4 23:59:32.873902 systemd-logind[1519]: Removed session 15.
Sep 4 23:59:32.931279 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 41564 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:32.932640 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:32.936562 systemd-logind[1519]: New session 16 of user core.
Sep 4 23:59:32.943297 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 23:59:33.518396 sshd[4117]: Connection closed by 10.0.0.1 port 41564
Sep 4 23:59:33.518747 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:33.528881 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:41564.service: Deactivated successfully.
Sep 4 23:59:33.531100 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 23:59:33.534008 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit.
Sep 4 23:59:33.539627 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:41566.service - OpenSSH per-connection server daemon (10.0.0.1:41566).
Sep 4 23:59:33.541120 systemd-logind[1519]: Removed session 16.
Sep 4 23:59:33.590767 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 41566 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:33.591913 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:33.595763 systemd-logind[1519]: New session 17 of user core.
Sep 4 23:59:33.613273 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 23:59:33.837071 sshd[4141]: Connection closed by 10.0.0.1 port 41566
Sep 4 23:59:33.838004 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:33.849736 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:41566.service: Deactivated successfully.
Sep 4 23:59:33.851541 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 23:59:33.852190 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit.
Sep 4 23:59:33.854674 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:41568.service - OpenSSH per-connection server daemon (10.0.0.1:41568).
Sep 4 23:59:33.856057 systemd-logind[1519]: Removed session 17.
Sep 4 23:59:33.915902 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 41568 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:33.917263 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:33.922009 systemd-logind[1519]: New session 18 of user core.
Sep 4 23:59:33.931314 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 23:59:34.044034 sshd[4155]: Connection closed by 10.0.0.1 port 41568
Sep 4 23:59:34.043873 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:34.047866 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:41568.service: Deactivated successfully.
Sep 4 23:59:34.049681 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 23:59:34.051628 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit.
Sep 4 23:59:34.053032 systemd-logind[1519]: Removed session 18.
Sep 4 23:59:39.057124 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:41572.service - OpenSSH per-connection server daemon (10.0.0.1:41572).
Sep 4 23:59:39.113313 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 41572 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:39.114721 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:39.119735 systemd-logind[1519]: New session 19 of user core.
Sep 4 23:59:39.130334 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 23:59:39.244081 sshd[4173]: Connection closed by 10.0.0.1 port 41572
Sep 4 23:59:39.244548 sshd-session[4171]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:39.248520 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:41572.service: Deactivated successfully.
Sep 4 23:59:39.250355 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 23:59:39.251176 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit.
Sep 4 23:59:39.252420 systemd-logind[1519]: Removed session 19.
Sep 4 23:59:44.260162 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:40098.service - OpenSSH per-connection server daemon (10.0.0.1:40098).
Sep 4 23:59:44.320225 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 40098 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:44.321659 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:44.326301 systemd-logind[1519]: New session 20 of user core.
Sep 4 23:59:44.333326 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 23:59:44.446441 sshd[4191]: Connection closed by 10.0.0.1 port 40098
Sep 4 23:59:44.446791 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:44.450274 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:40098.service: Deactivated successfully.
Sep 4 23:59:44.452209 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 23:59:44.452980 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit.
Sep 4 23:59:44.454327 systemd-logind[1519]: Removed session 20.
Sep 4 23:59:49.459768 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:40114.service - OpenSSH per-connection server daemon (10.0.0.1:40114).
Sep 4 23:59:49.519343 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 40114 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:49.522010 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:49.526235 systemd-logind[1519]: New session 21 of user core.
Sep 4 23:59:49.536357 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 23:59:49.662253 sshd[4208]: Connection closed by 10.0.0.1 port 40114
Sep 4 23:59:49.661407 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Sep 4 23:59:49.673498 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:40114.service: Deactivated successfully.
Sep 4 23:59:49.678430 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 23:59:49.681289 systemd-logind[1519]: Session 21 logged out. Waiting for processes to exit.
Sep 4 23:59:49.686013 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:40128.service - OpenSSH per-connection server daemon (10.0.0.1:40128).
Sep 4 23:59:49.688427 systemd-logind[1519]: Removed session 21.
Sep 4 23:59:49.759431 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 4 23:59:49.761570 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:59:49.770329 systemd-logind[1519]: New session 22 of user core.
Sep 4 23:59:49.783352 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 23:59:52.783290 containerd[1543]: time="2025-09-04T23:59:52.782172694Z" level=info msg="StopContainer for \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" with timeout 30 (s)"
Sep 4 23:59:52.785157 containerd[1543]: time="2025-09-04T23:59:52.784479954Z" level=info msg="Stop container \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" with signal terminated"
Sep 4 23:59:52.806805 systemd[1]: cri-containerd-33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f.scope: Deactivated successfully.
Sep 4 23:59:52.809877 containerd[1543]: time="2025-09-04T23:59:52.809840050Z" level=info msg="received exit event container_id:\"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" id:\"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" pid:3223 exited_at:{seconds:1757030392 nanos:809547703}"
Sep 4 23:59:52.810315 containerd[1543]: time="2025-09-04T23:59:52.809914207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" id:\"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" pid:3223 exited_at:{seconds:1757030392 nanos:809547703}"
Sep 4 23:59:52.824378 containerd[1543]: time="2025-09-04T23:59:52.824120829Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:59:52.831346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f-rootfs.mount: Deactivated successfully.
Sep 4 23:59:52.832323 containerd[1543]: time="2025-09-04T23:59:52.832272994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" id:\"6ea861e142b1ceab34820d30023af167d56a08e43c2d6b2add689502a12fcf66\" pid:4253 exited_at:{seconds:1757030392 nanos:829726865}"
Sep 4 23:59:52.834915 containerd[1543]: time="2025-09-04T23:59:52.834883041Z" level=info msg="StopContainer for \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" with timeout 2 (s)"
Sep 4 23:59:52.835339 containerd[1543]: time="2025-09-04T23:59:52.835290703Z" level=info msg="Stop container \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" with signal terminated"
Sep 4 23:59:52.842195 systemd-networkd[1445]: lxc_health: Link DOWN
Sep 4 23:59:52.842202 systemd-networkd[1445]: lxc_health: Lost carrier
Sep 4 23:59:52.848277 containerd[1543]: time="2025-09-04T23:59:52.848243539Z" level=info msg="StopContainer for \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" returns successfully"
Sep 4 23:59:52.851483 containerd[1543]: time="2025-09-04T23:59:52.851420081Z" level=info msg="StopPodSandbox for \"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\""
Sep 4 23:59:52.851587 containerd[1543]: time="2025-09-04T23:59:52.851529956Z" level=info msg="Container to stop \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:59:52.857461 systemd[1]: cri-containerd-19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f.scope: Deactivated successfully.
Sep 4 23:59:52.857760 systemd[1]: cri-containerd-19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f.scope: Consumed 6.214s CPU time, 125.9M memory peak, 128K read from disk, 12.9M written to disk.
Sep 4 23:59:52.859078 containerd[1543]: time="2025-09-04T23:59:52.859032710Z" level=info msg="received exit event container_id:\"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" id:\"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" pid:3295 exited_at:{seconds:1757030392 nanos:858770281}"
Sep 4 23:59:52.859191 containerd[1543]: time="2025-09-04T23:59:52.859080388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" id:\"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" pid:3295 exited_at:{seconds:1757030392 nanos:858770281}"
Sep 4 23:59:52.863540 systemd[1]: cri-containerd-2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed.scope: Deactivated successfully.
Sep 4 23:59:52.865590 containerd[1543]: time="2025-09-04T23:59:52.865553466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\" id:\"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\" pid:2876 exit_status:137 exited_at:{seconds:1757030392 nanos:865266199}"
Sep 4 23:59:52.880294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f-rootfs.mount: Deactivated successfully.
Sep 4 23:59:52.891231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed-rootfs.mount: Deactivated successfully.
Sep 4 23:59:52.894089 containerd[1543]: time="2025-09-04T23:59:52.894036387Z" level=info msg="shim disconnected" id=2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed namespace=k8s.io
Sep 4 23:59:52.901830 containerd[1543]: time="2025-09-04T23:59:52.894083705Z" level=warning msg="cleaning up after shim disconnected" id=2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed namespace=k8s.io
Sep 4 23:59:52.901830 containerd[1543]: time="2025-09-04T23:59:52.901821928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:59:52.901991 containerd[1543]: time="2025-09-04T23:59:52.899457751Z" level=info msg="StopContainer for \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" returns successfully"
Sep 4 23:59:52.902461 containerd[1543]: time="2025-09-04T23:59:52.902412783Z" level=info msg="StopPodSandbox for \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\""
Sep 4 23:59:52.902525 containerd[1543]: time="2025-09-04T23:59:52.902474060Z" level=info msg="Container to stop \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:59:52.902525 containerd[1543]: time="2025-09-04T23:59:52.902487059Z" level=info msg="Container to stop \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:59:52.902525 containerd[1543]: time="2025-09-04T23:59:52.902496499Z" level=info msg="Container to stop \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:59:52.902525 containerd[1543]: time="2025-09-04T23:59:52.902504699Z" level=info msg="Container to stop \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:59:52.902525 containerd[1543]: time="2025-09-04T23:59:52.902512898Z" level=info msg="Container to stop \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:59:52.908417 systemd[1]: cri-containerd-6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d.scope: Deactivated successfully.
Sep 4 23:59:52.922498 containerd[1543]: time="2025-09-04T23:59:52.922382514Z" level=info msg="received exit event sandbox_id:\"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\" exit_status:137 exited_at:{seconds:1757030392 nanos:865266199}"
Sep 4 23:59:52.922618 containerd[1543]: time="2025-09-04T23:59:52.922396873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" id:\"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" pid:2793 exit_status:137 exited_at:{seconds:1757030392 nanos:910003052}"
Sep 4 23:59:52.924286 containerd[1543]: time="2025-09-04T23:59:52.924244233Z" level=info msg="TearDown network for sandbox \"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\" successfully"
Sep 4 23:59:52.924286 containerd[1543]: time="2025-09-04T23:59:52.924280631Z" level=info msg="StopPodSandbox for \"2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed\" returns successfully"
Sep 4 23:59:52.924294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f7b3670638433c10af1123831539cdf02fca9e7e6de0cf48aba91473dd1daed-shm.mount: Deactivated successfully.
Sep 4 23:59:52.936315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d-rootfs.mount: Deactivated successfully.
Sep 4 23:59:52.942167 containerd[1543]: time="2025-09-04T23:59:52.940359492Z" level=info msg="received exit event sandbox_id:\"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" exit_status:137 exited_at:{seconds:1757030392 nanos:910003052}"
Sep 4 23:59:52.942167 containerd[1543]: time="2025-09-04T23:59:52.940457967Z" level=info msg="TearDown network for sandbox \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" successfully"
Sep 4 23:59:52.942167 containerd[1543]: time="2025-09-04T23:59:52.940550723Z" level=info msg="StopPodSandbox for \"6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d\" returns successfully"
Sep 4 23:59:52.944253 containerd[1543]: time="2025-09-04T23:59:52.944212164Z" level=info msg="shim disconnected" id=6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d namespace=k8s.io
Sep 4 23:59:52.944358 containerd[1543]: time="2025-09-04T23:59:52.944247002Z" level=warning msg="cleaning up after shim disconnected" id=6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d namespace=k8s.io
Sep 4 23:59:52.944385 containerd[1543]: time="2025-09-04T23:59:52.944361597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:59:52.982675 kubelet[2646]: I0904 23:59:52.982265 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-run\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.982675 kubelet[2646]: I0904 23:59:52.982325 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-kernel\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.982675 kubelet[2646]: I0904 23:59:52.982352 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dfa9288-4f0b-442d-9138-9fe232970d3a-clustermesh-secrets\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.982675 kubelet[2646]: I0904 23:59:52.982372 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-config-path\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.982675 kubelet[2646]: I0904 23:59:52.982392 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcv4c\" (UniqueName: \"kubernetes.io/projected/9046f78a-8aa4-4440-add9-7f298421896c-kube-api-access-jcv4c\") pod \"9046f78a-8aa4-4440-add9-7f298421896c\" (UID: \"9046f78a-8aa4-4440-add9-7f298421896c\") "
Sep 4 23:59:52.982675 kubelet[2646]: I0904 23:59:52.982411 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-hubble-tls\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983492 kubelet[2646]: I0904 23:59:52.982432 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx5hs\" (UniqueName: \"kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-kube-api-access-bx5hs\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983492 kubelet[2646]: I0904 23:59:52.982447 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-net\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983492 kubelet[2646]: I0904 23:59:52.982461 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-etc-cni-netd\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983492 kubelet[2646]: I0904 23:59:52.982479 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-bpf-maps\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983492 kubelet[2646]: I0904 23:59:52.982496 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-hostproc\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983492 kubelet[2646]: I0904 23:59:52.982510 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cni-path\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983622 kubelet[2646]: I0904 23:59:52.982529 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-cgroup\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983622 kubelet[2646]: I0904 23:59:52.982544 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-lib-modules\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.983622 kubelet[2646]: I0904 23:59:52.982570 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9046f78a-8aa4-4440-add9-7f298421896c-cilium-config-path\") pod \"9046f78a-8aa4-4440-add9-7f298421896c\" (UID: \"9046f78a-8aa4-4440-add9-7f298421896c\") "
Sep 4 23:59:52.983622 kubelet[2646]: I0904 23:59:52.982585 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-xtables-lock\") pod \"1dfa9288-4f0b-442d-9138-9fe232970d3a\" (UID: \"1dfa9288-4f0b-442d-9138-9fe232970d3a\") "
Sep 4 23:59:52.984340 kubelet[2646]: I0904 23:59:52.983894 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cni-path" (OuterVolumeSpecName: "cni-path") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.984340 kubelet[2646]: I0904 23:59:52.983985 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.984340 kubelet[2646]: I0904 23:59:52.984001 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.984340 kubelet[2646]: I0904 23:59:52.984015 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-hostproc" (OuterVolumeSpecName: "hostproc") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.984340 kubelet[2646]: I0904 23:59:52.984032 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.984497 kubelet[2646]: I0904 23:59:52.984046 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.984867 kubelet[2646]: I0904 23:59:52.984816 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.985165 kubelet[2646]: I0904 23:59:52.985123 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.985219 kubelet[2646]: I0904 23:59:52.985175 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.985432 kubelet[2646]: I0904 23:59:52.985382 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:59:52.986939 kubelet[2646]: I0904 23:59:52.986900 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9046f78a-8aa4-4440-add9-7f298421896c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9046f78a-8aa4-4440-add9-7f298421896c" (UID: "9046f78a-8aa4-4440-add9-7f298421896c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:59:52.987017 kubelet[2646]: I0904 23:59:52.986941 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:59:52.987654 kubelet[2646]: I0904 23:59:52.987575 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-kube-api-access-bx5hs" (OuterVolumeSpecName: "kube-api-access-bx5hs") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "kube-api-access-bx5hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:59:52.987654 kubelet[2646]: I0904 23:59:52.987613 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfa9288-4f0b-442d-9138-9fe232970d3a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 4 23:59:52.987775 kubelet[2646]: I0904 23:59:52.987755 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1dfa9288-4f0b-442d-9138-9fe232970d3a" (UID: "1dfa9288-4f0b-442d-9138-9fe232970d3a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:59:52.989022 kubelet[2646]: I0904 23:59:52.988988 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9046f78a-8aa4-4440-add9-7f298421896c-kube-api-access-jcv4c" (OuterVolumeSpecName: "kube-api-access-jcv4c") pod "9046f78a-8aa4-4440-add9-7f298421896c" (UID: "9046f78a-8aa4-4440-add9-7f298421896c"). InnerVolumeSpecName "kube-api-access-jcv4c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083315 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083353 2646 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083364 2646 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083376 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9046f78a-8aa4-4440-add9-7f298421896c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083386 2646 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083396 2646 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dfa9288-4f0b-442d-9138-9fe232970d3a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083405 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084205 kubelet[2646]: I0904 23:59:53.083413 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1dfa9288-4f0b-442d-9138-9fe232970d3a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084461 kubelet[2646]: I0904 23:59:53.083421 2646 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084461 kubelet[2646]: I0904 23:59:53.083435 2646 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jcv4c\" (UniqueName: \"kubernetes.io/projected/9046f78a-8aa4-4440-add9-7f298421896c-kube-api-access-jcv4c\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084461 kubelet[2646]: I0904 23:59:53.083476 2646 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bx5hs\" (UniqueName: \"kubernetes.io/projected/1dfa9288-4f0b-442d-9138-9fe232970d3a-kube-api-access-bx5hs\") on node \"localhost\" DevicePath \"\""
Sep 4 23:59:53.084461 kubelet[2646]: I0904
23:59:53.083486 2646 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 23:59:53.084461 kubelet[2646]: I0904 23:59:53.083494 2646 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 23:59:53.084461 kubelet[2646]: I0904 23:59:53.083509 2646 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 23:59:53.084461 kubelet[2646]: I0904 23:59:53.083517 2646 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 23:59:53.084461 kubelet[2646]: I0904 23:59:53.083524 2646 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dfa9288-4f0b-442d-9138-9fe232970d3a-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 23:59:53.550542 kubelet[2646]: I0904 23:59:53.550507 2646 scope.go:117] "RemoveContainer" containerID="33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f" Sep 4 23:59:53.554443 containerd[1543]: time="2025-09-04T23:59:53.554234324Z" level=info msg="RemoveContainer for \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\"" Sep 4 23:59:53.555281 systemd[1]: Removed slice kubepods-besteffort-pod9046f78a_8aa4_4440_add9_7f298421896c.slice - libcontainer container kubepods-besteffort-pod9046f78a_8aa4_4440_add9_7f298421896c.slice. 
Sep 4 23:59:53.565920 systemd[1]: Removed slice kubepods-burstable-pod1dfa9288_4f0b_442d_9138_9fe232970d3a.slice - libcontainer container kubepods-burstable-pod1dfa9288_4f0b_442d_9138_9fe232970d3a.slice. Sep 4 23:59:53.566173 systemd[1]: kubepods-burstable-pod1dfa9288_4f0b_442d_9138_9fe232970d3a.slice: Consumed 6.304s CPU time, 126.2M memory peak, 140K read from disk, 12.9M written to disk. Sep 4 23:59:53.572648 containerd[1543]: time="2025-09-04T23:59:53.572593974Z" level=info msg="RemoveContainer for \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" returns successfully" Sep 4 23:59:53.573763 kubelet[2646]: I0904 23:59:53.573491 2646 scope.go:117] "RemoveContainer" containerID="33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f" Sep 4 23:59:53.574361 containerd[1543]: time="2025-09-04T23:59:53.574320063Z" level=error msg="ContainerStatus for \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\": not found" Sep 4 23:59:53.582018 kubelet[2646]: E0904 23:59:53.581802 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\": not found" containerID="33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f" Sep 4 23:59:53.589732 kubelet[2646]: I0904 23:59:53.589227 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f"} err="failed to get container status \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"33588ef9840b8eb6da2e2b8373335ac31e74fa3687c51fd9cfbbe5d39fa62e9f\": not found" Sep 
4 23:59:53.589732 kubelet[2646]: I0904 23:59:53.589739 2646 scope.go:117] "RemoveContainer" containerID="19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f" Sep 4 23:59:53.593304 containerd[1543]: time="2025-09-04T23:59:53.593261329Z" level=info msg="RemoveContainer for \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\"" Sep 4 23:59:53.603878 containerd[1543]: time="2025-09-04T23:59:53.603822297Z" level=info msg="RemoveContainer for \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" returns successfully" Sep 4 23:59:53.605114 kubelet[2646]: I0904 23:59:53.605079 2646 scope.go:117] "RemoveContainer" containerID="d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94" Sep 4 23:59:53.606693 containerd[1543]: time="2025-09-04T23:59:53.606658701Z" level=info msg="RemoveContainer for \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\"" Sep 4 23:59:53.610255 containerd[1543]: time="2025-09-04T23:59:53.610225036Z" level=info msg="RemoveContainer for \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" returns successfully" Sep 4 23:59:53.610443 kubelet[2646]: I0904 23:59:53.610421 2646 scope.go:117] "RemoveContainer" containerID="ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b" Sep 4 23:59:53.613015 containerd[1543]: time="2025-09-04T23:59:53.612981123Z" level=info msg="RemoveContainer for \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\"" Sep 4 23:59:53.616432 containerd[1543]: time="2025-09-04T23:59:53.616397863Z" level=info msg="RemoveContainer for \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" returns successfully" Sep 4 23:59:53.616601 kubelet[2646]: I0904 23:59:53.616581 2646 scope.go:117] "RemoveContainer" containerID="dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a" Sep 4 23:59:53.618123 containerd[1543]: time="2025-09-04T23:59:53.618100794Z" level=info msg="RemoveContainer for 
\"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\"" Sep 4 23:59:53.621068 containerd[1543]: time="2025-09-04T23:59:53.621032274Z" level=info msg="RemoveContainer for \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" returns successfully" Sep 4 23:59:53.621248 kubelet[2646]: I0904 23:59:53.621225 2646 scope.go:117] "RemoveContainer" containerID="8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c" Sep 4 23:59:53.622599 containerd[1543]: time="2025-09-04T23:59:53.622549572Z" level=info msg="RemoveContainer for \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\"" Sep 4 23:59:53.625101 containerd[1543]: time="2025-09-04T23:59:53.625071069Z" level=info msg="RemoveContainer for \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" returns successfully" Sep 4 23:59:53.625278 kubelet[2646]: I0904 23:59:53.625253 2646 scope.go:117] "RemoveContainer" containerID="19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f" Sep 4 23:59:53.625524 containerd[1543]: time="2025-09-04T23:59:53.625459173Z" level=error msg="ContainerStatus for \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\": not found" Sep 4 23:59:53.625633 kubelet[2646]: E0904 23:59:53.625606 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\": not found" containerID="19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f" Sep 4 23:59:53.625676 kubelet[2646]: I0904 23:59:53.625643 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f"} err="failed to get container 
status \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\": rpc error: code = NotFound desc = an error occurred when try to find container \"19f801a49f487d6f86ebab7516030e38150f1b610b84203a573a46bd4c5ba26f\": not found" Sep 4 23:59:53.625716 kubelet[2646]: I0904 23:59:53.625681 2646 scope.go:117] "RemoveContainer" containerID="d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94" Sep 4 23:59:53.625930 containerd[1543]: time="2025-09-04T23:59:53.625858117Z" level=error msg="ContainerStatus for \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\": not found" Sep 4 23:59:53.626020 kubelet[2646]: E0904 23:59:53.626001 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\": not found" containerID="d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94" Sep 4 23:59:53.626082 kubelet[2646]: I0904 23:59:53.626032 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94"} err="failed to get container status \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\": rpc error: code = NotFound desc = an error occurred when try to find container \"d054996befe3ffb63272fe49c11437291acbc484095f2f9c345b1f860d7e2c94\": not found" Sep 4 23:59:53.626082 kubelet[2646]: I0904 23:59:53.626066 2646 scope.go:117] "RemoveContainer" containerID="ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b" Sep 4 23:59:53.626262 containerd[1543]: time="2025-09-04T23:59:53.626234501Z" level=error msg="ContainerStatus for \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\": not found" Sep 4 23:59:53.626354 kubelet[2646]: E0904 23:59:53.626336 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\": not found" containerID="ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b" Sep 4 23:59:53.626385 kubelet[2646]: I0904 23:59:53.626358 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b"} err="failed to get container status \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffc10a4e513c7360c9fde36bab63432eea4fd42ae642ad0cd1cefa5b1e090e8b\": not found" Sep 4 23:59:53.626423 kubelet[2646]: I0904 23:59:53.626401 2646 scope.go:117] "RemoveContainer" containerID="dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a" Sep 4 23:59:53.626611 containerd[1543]: time="2025-09-04T23:59:53.626554288Z" level=error msg="ContainerStatus for \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\": not found" Sep 4 23:59:53.626696 kubelet[2646]: E0904 23:59:53.626673 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\": not found" containerID="dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a" Sep 4 23:59:53.626728 kubelet[2646]: I0904 23:59:53.626699 
2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a"} err="failed to get container status \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc4da08eba209c02893e7c98ff3cd352beb2d6f432cd35b24854899f48f2c94a\": not found" Sep 4 23:59:53.626728 kubelet[2646]: I0904 23:59:53.626712 2646 scope.go:117] "RemoveContainer" containerID="8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c" Sep 4 23:59:53.626855 containerd[1543]: time="2025-09-04T23:59:53.626831117Z" level=error msg="ContainerStatus for \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\": not found" Sep 4 23:59:53.626949 kubelet[2646]: E0904 23:59:53.626933 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\": not found" containerID="8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c" Sep 4 23:59:53.626982 kubelet[2646]: I0904 23:59:53.626952 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c"} err="failed to get container status \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f2548f8f928060e0d16e025a131fa5ef129df75be3d5db88b85a8946782ea3c\": not found" Sep 4 23:59:53.831076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6afaa6b3f8c1a7c237822aae6de2382bf7114ad67645fd18df69392e9d34a92d-shm.mount: 
Deactivated successfully. Sep 4 23:59:53.831184 systemd[1]: var-lib-kubelet-pods-9046f78a\x2d8aa4\x2d4440\x2dadd9\x2d7f298421896c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djcv4c.mount: Deactivated successfully. Sep 4 23:59:53.831237 systemd[1]: var-lib-kubelet-pods-1dfa9288\x2d4f0b\x2d442d\x2d9138\x2d9fe232970d3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbx5hs.mount: Deactivated successfully. Sep 4 23:59:53.831288 systemd[1]: var-lib-kubelet-pods-1dfa9288\x2d4f0b\x2d442d\x2d9138\x2d9fe232970d3a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:59:53.831338 systemd[1]: var-lib-kubelet-pods-1dfa9288\x2d4f0b\x2d442d\x2d9138\x2d9fe232970d3a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:59:54.343512 kubelet[2646]: I0904 23:59:54.343470 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfa9288-4f0b-442d-9138-9fe232970d3a" path="/var/lib/kubelet/pods/1dfa9288-4f0b-442d-9138-9fe232970d3a/volumes" Sep 4 23:59:54.343978 kubelet[2646]: I0904 23:59:54.343945 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9046f78a-8aa4-4440-add9-7f298421896c" path="/var/lib/kubelet/pods/9046f78a-8aa4-4440-add9-7f298421896c/volumes" Sep 4 23:59:54.732267 sshd[4223]: Connection closed by 10.0.0.1 port 40128 Sep 4 23:59:54.733427 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:54.744602 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:40128.service: Deactivated successfully. Sep 4 23:59:54.747640 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:59:54.747848 systemd[1]: session-22.scope: Consumed 2.309s CPU time, 25.6M memory peak. Sep 4 23:59:54.748549 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:59:54.751324 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:50622.service - OpenSSH per-connection server daemon (10.0.0.1:50622). 
Sep 4 23:59:54.753209 systemd-logind[1519]: Removed session 22. Sep 4 23:59:54.812938 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 50622 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:54.814306 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:54.819202 systemd-logind[1519]: New session 23 of user core. Sep 4 23:59:54.833294 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:59:55.806781 sshd[4379]: Connection closed by 10.0.0.1 port 50622 Sep 4 23:59:55.807484 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:55.820402 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:50622.service: Deactivated successfully. Sep 4 23:59:55.822070 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:59:55.823326 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:59:55.827553 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:50628.service - OpenSSH per-connection server daemon (10.0.0.1:50628). Sep 4 23:59:55.829563 systemd-logind[1519]: Removed session 23. Sep 4 23:59:55.838605 kubelet[2646]: I0904 23:59:55.838580 2646 memory_manager.go:355] "RemoveStaleState removing state" podUID="1dfa9288-4f0b-442d-9138-9fe232970d3a" containerName="cilium-agent" Sep 4 23:59:55.838867 kubelet[2646]: I0904 23:59:55.838853 2646 memory_manager.go:355] "RemoveStaleState removing state" podUID="9046f78a-8aa4-4440-add9-7f298421896c" containerName="cilium-operator" Sep 4 23:59:55.849266 systemd[1]: Created slice kubepods-burstable-podf0e9b813_47d0_4a22_9fe1_e3d917471d3e.slice - libcontainer container kubepods-burstable-podf0e9b813_47d0_4a22_9fe1_e3d917471d3e.slice. 
Sep 4 23:59:55.886065 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 50628 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:55.887339 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:55.890878 systemd-logind[1519]: New session 24 of user core. Sep 4 23:59:55.896278 kubelet[2646]: I0904 23:59:55.896247 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-clustermesh-secrets\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896371 kubelet[2646]: I0904 23:59:55.896297 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-cilium-ipsec-secrets\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896371 kubelet[2646]: I0904 23:59:55.896315 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-host-proc-sys-net\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896371 kubelet[2646]: I0904 23:59:55.896332 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-host-proc-sys-kernel\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896371 kubelet[2646]: I0904 23:59:55.896350 2646 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-cilium-run\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896371 kubelet[2646]: I0904 23:59:55.896366 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-cni-path\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896469 kubelet[2646]: I0904 23:59:55.896384 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-lib-modules\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896469 kubelet[2646]: I0904 23:59:55.896411 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-etc-cni-netd\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896469 kubelet[2646]: I0904 23:59:55.896433 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-hubble-tls\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896469 kubelet[2646]: I0904 23:59:55.896451 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2lm\" (UniqueName: 
\"kubernetes.io/projected/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-kube-api-access-km2lm\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896543 kubelet[2646]: I0904 23:59:55.896497 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-hostproc\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896543 kubelet[2646]: I0904 23:59:55.896538 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-cilium-cgroup\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896582 kubelet[2646]: I0904 23:59:55.896555 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-xtables-lock\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896602 kubelet[2646]: I0904 23:59:55.896590 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-cilium-config-path\") pod \"cilium-lmmwv\" (UID: \"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.896624 kubelet[2646]: I0904 23:59:55.896607 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e9b813-47d0-4a22-9fe1-e3d917471d3e-bpf-maps\") pod \"cilium-lmmwv\" (UID: 
\"f0e9b813-47d0-4a22-9fe1-e3d917471d3e\") " pod="kube-system/cilium-lmmwv" Sep 4 23:59:55.904316 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 23:59:55.953204 sshd[4393]: Connection closed by 10.0.0.1 port 50628 Sep 4 23:59:55.953845 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Sep 4 23:59:55.972198 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:50628.service: Deactivated successfully. Sep 4 23:59:55.974575 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:59:55.975327 systemd-logind[1519]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:59:55.977753 systemd[1]: Started sshd@24-10.0.0.113:22-10.0.0.1:50638.service - OpenSSH per-connection server daemon (10.0.0.1:50638). Sep 4 23:59:55.978277 systemd-logind[1519]: Removed session 24. Sep 4 23:59:56.031912 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 50638 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 4 23:59:56.033152 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:59:56.036730 systemd-logind[1519]: New session 25 of user core. Sep 4 23:59:56.048289 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 23:59:56.156439 containerd[1543]: time="2025-09-04T23:59:56.156333691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmmwv,Uid:f0e9b813-47d0-4a22-9fe1-e3d917471d3e,Namespace:kube-system,Attempt:0,}"
Sep 4 23:59:56.171168 containerd[1543]: time="2025-09-04T23:59:56.170313903Z" level=info msg="connecting to shim f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb" address="unix:///run/containerd/s/5b4ef8a1b41ea5cb8e579d9f9cc31557166390c9e538de011b59bd7c5de90876" namespace=k8s.io protocol=ttrpc version=3
Sep 4 23:59:56.190345 systemd[1]: Started cri-containerd-f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb.scope - libcontainer container f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb.
Sep 4 23:59:56.210598 containerd[1543]: time="2025-09-04T23:59:56.210491359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmmwv,Uid:f0e9b813-47d0-4a22-9fe1-e3d917471d3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\""
Sep 4 23:59:56.213462 containerd[1543]: time="2025-09-04T23:59:56.213397982Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:59:56.218812 containerd[1543]: time="2025-09-04T23:59:56.218780602Z" level=info msg="Container d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98: CDI devices from CRI Config.CDIDevices: []"
Sep 4 23:59:56.224372 containerd[1543]: time="2025-09-04T23:59:56.224328616Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98\""
Sep 4 23:59:56.226401 containerd[1543]: time="2025-09-04T23:59:56.226350949Z" level=info msg="StartContainer for \"d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98\""
Sep 4 23:59:56.227310 containerd[1543]: time="2025-09-04T23:59:56.227272478Z" level=info msg="connecting to shim d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98" address="unix:///run/containerd/s/5b4ef8a1b41ea5cb8e579d9f9cc31557166390c9e538de011b59bd7c5de90876" protocol=ttrpc version=3
Sep 4 23:59:56.249320 systemd[1]: Started cri-containerd-d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98.scope - libcontainer container d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98.
Sep 4 23:59:56.275086 containerd[1543]: time="2025-09-04T23:59:56.275050080Z" level=info msg="StartContainer for \"d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98\" returns successfully"
Sep 4 23:59:56.282399 systemd[1]: cri-containerd-d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98.scope: Deactivated successfully.
Sep 4 23:59:56.285434 containerd[1543]: time="2025-09-04T23:59:56.285403734Z" level=info msg="received exit event container_id:\"d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98\" id:\"d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98\" pid:4473 exited_at:{seconds:1757030396 nanos:285196501}"
Sep 4 23:59:56.285518 containerd[1543]: time="2025-09-04T23:59:56.285493331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98\" id:\"d8dc812891510f099d73b830b86bc7a7518c2ad5cd3bef0ba936d68dcc2acb98\" pid:4473 exited_at:{seconds:1757030396 nanos:285196501}"
Sep 4 23:59:56.570620 containerd[1543]: time="2025-09-04T23:59:56.570579955Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:59:56.576638 containerd[1543]: time="2025-09-04T23:59:56.576590274Z" level=info msg="Container 18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2: CDI devices from CRI Config.CDIDevices: []"
Sep 4 23:59:56.581423 containerd[1543]: time="2025-09-04T23:59:56.581372034Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2\""
Sep 4 23:59:56.582011 containerd[1543]: time="2025-09-04T23:59:56.581987054Z" level=info msg="StartContainer for \"18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2\""
Sep 4 23:59:56.583486 containerd[1543]: time="2025-09-04T23:59:56.583428966Z" level=info msg="connecting to shim 18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2" address="unix:///run/containerd/s/5b4ef8a1b41ea5cb8e579d9f9cc31557166390c9e538de011b59bd7c5de90876" protocol=ttrpc version=3
Sep 4 23:59:56.598284 systemd[1]: Started cri-containerd-18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2.scope - libcontainer container 18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2.
Sep 4 23:59:56.620359 containerd[1543]: time="2025-09-04T23:59:56.620328211Z" level=info msg="StartContainer for \"18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2\" returns successfully"
Sep 4 23:59:56.626747 systemd[1]: cri-containerd-18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2.scope: Deactivated successfully.
Sep 4 23:59:56.627927 containerd[1543]: time="2025-09-04T23:59:56.627878439Z" level=info msg="received exit event container_id:\"18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2\" id:\"18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2\" pid:4518 exited_at:{seconds:1757030396 nanos:627597928}"
Sep 4 23:59:56.628099 containerd[1543]: time="2025-09-04T23:59:56.628080152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2\" id:\"18083213fe0352ccf9ba476dead3b1515181c60ad704ffc6c2624a46eda0eea2\" pid:4518 exited_at:{seconds:1757030396 nanos:627597928}"
Sep 4 23:59:57.404435 kubelet[2646]: E0904 23:59:57.404372 2646 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 23:59:57.576327 containerd[1543]: time="2025-09-04T23:59:57.576032261Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:59:57.596422 containerd[1543]: time="2025-09-04T23:59:57.596374388Z" level=info msg="Container 380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1: CDI devices from CRI Config.CDIDevices: []"
Sep 4 23:59:57.610161 containerd[1543]: time="2025-09-04T23:59:57.610088081Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1\""
Sep 4 23:59:57.611180 containerd[1543]: time="2025-09-04T23:59:57.611077290Z" level=info msg="StartContainer for \"380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1\""
Sep 4 23:59:57.615442 containerd[1543]: time="2025-09-04T23:59:57.614669738Z" level=info msg="connecting to shim 380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1" address="unix:///run/containerd/s/5b4ef8a1b41ea5cb8e579d9f9cc31557166390c9e538de011b59bd7c5de90876" protocol=ttrpc version=3
Sep 4 23:59:57.636644 systemd[1]: Started cri-containerd-380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1.scope - libcontainer container 380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1.
Sep 4 23:59:57.679007 containerd[1543]: time="2025-09-04T23:59:57.678353076Z" level=info msg="StartContainer for \"380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1\" returns successfully"
Sep 4 23:59:57.678434 systemd[1]: cri-containerd-380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1.scope: Deactivated successfully.
Sep 4 23:59:57.681830 containerd[1543]: time="2025-09-04T23:59:57.681724411Z" level=info msg="received exit event container_id:\"380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1\" id:\"380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1\" pid:4563 exited_at:{seconds:1757030397 nanos:681412341}"
Sep 4 23:59:57.681954 containerd[1543]: time="2025-09-04T23:59:57.681795769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1\" id:\"380e91b5eab57be06e89a854681712adbaa2925d1a24886bd829f8620e648ea1\" pid:4563 exited_at:{seconds:1757030397 nanos:681412341}"
Sep 4 23:59:58.582048 containerd[1543]: time="2025-09-04T23:59:58.582002496Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:59:58.589428 containerd[1543]: time="2025-09-04T23:59:58.588725302Z" level=info msg="Container 167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5: CDI devices from CRI Config.CDIDevices: []"
Sep 4 23:59:58.592579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480791093.mount: Deactivated successfully.
Sep 4 23:59:58.601828 containerd[1543]: time="2025-09-04T23:59:58.601777765Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5\""
Sep 4 23:59:58.602361 containerd[1543]: time="2025-09-04T23:59:58.602331469Z" level=info msg="StartContainer for \"167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5\""
Sep 4 23:59:58.603348 containerd[1543]: time="2025-09-04T23:59:58.603314281Z" level=info msg="connecting to shim 167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5" address="unix:///run/containerd/s/5b4ef8a1b41ea5cb8e579d9f9cc31557166390c9e538de011b59bd7c5de90876" protocol=ttrpc version=3
Sep 4 23:59:58.630318 systemd[1]: Started cri-containerd-167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5.scope - libcontainer container 167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5.
Sep 4 23:59:58.653925 systemd[1]: cri-containerd-167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5.scope: Deactivated successfully.
Sep 4 23:59:58.655650 containerd[1543]: time="2025-09-04T23:59:58.655488334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5\" id:\"167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5\" pid:4602 exited_at:{seconds:1757030398 nanos:655123905}"
Sep 4 23:59:58.656030 containerd[1543]: time="2025-09-04T23:59:58.656008759Z" level=info msg="received exit event container_id:\"167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5\" id:\"167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5\" pid:4602 exited_at:{seconds:1757030398 nanos:655123905}"
Sep 4 23:59:58.662461 containerd[1543]: time="2025-09-04T23:59:58.662428934Z" level=info msg="StartContainer for \"167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5\" returns successfully"
Sep 4 23:59:58.673251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-167acfe023481f09bc4484fa136886ef270a33e93fc48d7498c00efdb88a8df5-rootfs.mount: Deactivated successfully.
Sep 4 23:59:59.588020 containerd[1543]: time="2025-09-04T23:59:59.587919807Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:59:59.601933 containerd[1543]: time="2025-09-04T23:59:59.601890994Z" level=info msg="Container dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920: CDI devices from CRI Config.CDIDevices: []"
Sep 4 23:59:59.602764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1868265940.mount: Deactivated successfully.
Sep 4 23:59:59.613146 containerd[1543]: time="2025-09-04T23:59:59.613093655Z" level=info msg="CreateContainer within sandbox \"f18f6ff8201d3dc2f306004fc70c1c9516584a8f80dc4c9bbb8cb922081edbeb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\""
Sep 4 23:59:59.613556 containerd[1543]: time="2025-09-04T23:59:59.613535523Z" level=info msg="StartContainer for \"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\""
Sep 4 23:59:59.614784 containerd[1543]: time="2025-09-04T23:59:59.614406900Z" level=info msg="connecting to shim dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920" address="unix:///run/containerd/s/5b4ef8a1b41ea5cb8e579d9f9cc31557166390c9e538de011b59bd7c5de90876" protocol=ttrpc version=3
Sep 4 23:59:59.650358 systemd[1]: Started cri-containerd-dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920.scope - libcontainer container dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920.
Sep 4 23:59:59.694384 containerd[1543]: time="2025-09-04T23:59:59.694339086Z" level=info msg="StartContainer for \"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\" returns successfully"
Sep 4 23:59:59.749850 containerd[1543]: time="2025-09-04T23:59:59.749813885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\" id:\"87f1596893a89274f52e1b3d3f1eb24d9ffb0ecc64bb151b7260e6469bc407d6\" pid:4673 exited_at:{seconds:1757030399 nanos:749554692}"
Sep 4 23:59:59.958339 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 5 00:00:02.406623 containerd[1543]: time="2025-09-05T00:00:02.406585190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\" id:\"d67b2c96ae408a7ea252c518814b3181f8e0410f32a673162c06b604914d20d0\" pid:5067 exit_status:1 exited_at:{seconds:1757030402 nanos:406177838}"
Sep 5 00:00:02.842227 systemd-networkd[1445]: lxc_health: Link UP
Sep 5 00:00:02.852166 systemd-networkd[1445]: lxc_health: Gained carrier
Sep 5 00:00:04.179257 kubelet[2646]: I0905 00:00:04.179188 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmmwv" podStartSLOduration=9.179172097 podStartE2EDuration="9.179172097s" podCreationTimestamp="2025-09-04 23:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:00:00.609338662 +0000 UTC m=+78.349208502" watchObservedRunningTime="2025-09-05 00:00:04.179172097 +0000 UTC m=+81.919041897"
Sep 5 00:00:04.558014 containerd[1543]: time="2025-09-05T00:00:04.557958900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\" id:\"557d4020381b9b188986890aec2f1f772a561952b90eca3482d03b479e6ce824\" pid:5204 exited_at:{seconds:1757030404 nanos:557582666}"
Sep 5 00:00:04.575391 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Sep 5 00:00:06.667949 containerd[1543]: time="2025-09-05T00:00:06.667901776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\" id:\"62905f13036d698d0d6f3f7936c2188fef01116f7130832f7efaf1881ab9cec1\" pid:5238 exited_at:{seconds:1757030406 nanos:667521581}"
Sep 5 00:00:08.795686 containerd[1543]: time="2025-09-05T00:00:08.795627413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfd89bc445e0f24c98caf0526d9506d3e2c73377024fec07ae1a3e7ad5cb9920\" id:\"afda9b42e17a90222303448b39eba779a261a94b9cecfd7e9fb45dfffd38092c\" pid:5266 exited_at:{seconds:1757030408 nanos:795261897}"
Sep 5 00:00:08.801169 sshd[4407]: Connection closed by 10.0.0.1 port 50638
Sep 5 00:00:08.801064 sshd-session[4400]: pam_unix(sshd:session): session closed for user core
Sep 5 00:00:08.803982 systemd[1]: sshd@24-10.0.0.113:22-10.0.0.1:50638.service: Deactivated successfully.
Sep 5 00:00:08.806311 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:00:08.810209 systemd-logind[1519]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:00:08.812381 systemd-logind[1519]: Removed session 25.