Sep 10 23:45:45.747761 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:45:45.747782 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 10 23:45:45.747792 kernel: KASLR enabled
Sep 10 23:45:45.747798 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:45:45.747803 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 10 23:45:45.747809 kernel: random: crng init done
Sep 10 23:45:45.747816 kernel: secureboot: Secure boot disabled
Sep 10 23:45:45.747822 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:45:45.747828 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 10 23:45:45.747835 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:45:45.747842 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747848 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747853 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747859 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747866 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747874 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747880 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747887 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747893 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:45:45.747905 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:45:45.747912 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:45:45.747918 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:45:45.747924 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 10 23:45:45.747931 kernel: Zone ranges:
Sep 10 23:45:45.747939 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:45:45.747947 kernel: DMA32 empty
Sep 10 23:45:45.747953 kernel: Normal empty
Sep 10 23:45:45.747959 kernel: Device empty
Sep 10 23:45:45.747965 kernel: Movable zone start for each node
Sep 10 23:45:45.747972 kernel: Early memory node ranges
Sep 10 23:45:45.747978 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 10 23:45:45.747984 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 10 23:45:45.747990 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 10 23:45:45.747996 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 10 23:45:45.748002 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 10 23:45:45.748011 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 10 23:45:45.748017 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 10 23:45:45.748025 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 10 23:45:45.748034 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 10 23:45:45.748041 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 23:45:45.748057 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 23:45:45.748063 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 23:45:45.748070 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:45:45.748081 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:45:45.748088 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:45:45.748094 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 10 23:45:45.748100 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:45:45.748107 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:45:45.748114 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:45:45.748120 kernel: psci: Trusted OS migration not required
Sep 10 23:45:45.748126 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:45:45.748133 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:45:45.748139 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:45:45.748147 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:45:45.748154 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:45:45.748160 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:45:45.748167 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:45:45.748173 kernel: CPU features: detected: Spectre-v4
Sep 10 23:45:45.748180 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:45:45.748186 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:45:45.748193 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:45:45.748199 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:45:45.748205 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:45:45.748212 kernel: alternatives: applying boot alternatives
Sep 10 23:45:45.748219 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:45:45.748227 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:45:45.748234 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:45:45.748240 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:45:45.748247 kernel: Fallback order for Node 0: 0
Sep 10 23:45:45.748265 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:45:45.748271 kernel: Policy zone: DMA
Sep 10 23:45:45.748278 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:45:45.748284 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:45:45.748294 kernel: software IO TLB: area num 4.
Sep 10 23:45:45.748301 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:45:45.748363 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 10 23:45:45.748374 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:45:45.748380 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:45:45.748387 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:45:45.748394 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:45:45.748401 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:45:45.748407 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:45:45.748414 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:45:45.748420 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:45:45.748430 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:45:45.748437 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:45:45.748443 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:45:45.748451 kernel: GICv3: 256 SPIs implemented
Sep 10 23:45:45.748457 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:45:45.748464 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:45:45.748470 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:45:45.748476 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:45:45.748483 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:45:45.748489 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:45:45.748496 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:45:45.748503 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:45:45.748509 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:45:45.748521 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:45:45.748528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:45:45.748536 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:45:45.748542 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:45:45.748549 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:45:45.748556 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:45:45.748568 kernel: arm-pv: using stolen time PV
Sep 10 23:45:45.748578 kernel: Console: colour dummy device 80x25
Sep 10 23:45:45.748585 kernel: ACPI: Core revision 20240827
Sep 10 23:45:45.748593 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:45:45.748600 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:45:45.748606 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:45:45.748615 kernel: landlock: Up and running.
Sep 10 23:45:45.748622 kernel: SELinux: Initializing.
Sep 10 23:45:45.748628 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:45:45.748635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:45:45.748641 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:45:45.748648 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:45:45.748655 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:45:45.748662 kernel: Remapping and enabling EFI services.
Sep 10 23:45:45.748669 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:45:45.748682 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:45:45.748689 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:45:45.748696 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:45:45.748710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:45:45.748717 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:45:45.748724 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:45:45.748732 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:45:45.748739 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:45:45.748748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:45:45.748755 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:45:45.748762 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:45:45.748770 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:45:45.748777 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:45:45.748784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:45:45.748793 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:45:45.748801 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:45:45.748808 kernel: SMP: Total of 4 processors activated.
Sep 10 23:45:45.748817 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:45:45.748824 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:45:45.748831 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:45:45.748838 kernel: CPU features: detected: Common not Private translations
Sep 10 23:45:45.748845 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:45:45.748852 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:45:45.748858 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:45:45.748865 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:45:45.748872 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:45:45.748880 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:45:45.748887 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:45:45.748894 kernel: alternatives: applying system-wide alternatives
Sep 10 23:45:45.748902 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:45:45.748912 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 10 23:45:45.748919 kernel: devtmpfs: initialized
Sep 10 23:45:45.748926 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:45:45.748933 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:45:45.748940 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:45:45.748951 kernel: 0 pages in range for non-PLT usage
Sep 10 23:45:45.748958 kernel: 508560 pages in range for PLT usage
Sep 10 23:45:45.748965 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:45:45.748972 kernel: SMBIOS 3.0.0 present.
Sep 10 23:45:45.748979 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:45:45.748985 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:45:45.748992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:45:45.748999 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:45:45.749006 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:45:45.749014 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:45:45.749021 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:45:45.749032 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 10 23:45:45.749039 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:45:45.749046 kernel: cpuidle: using governor menu
Sep 10 23:45:45.749053 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:45:45.749061 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:45:45.749068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:45:45.749074 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:45:45.749083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:45:45.749091 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:45:45.749098 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:45:45.749105 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:45:45.749112 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:45:45.749120 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:45:45.749126 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:45:45.749133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:45:45.749140 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:45:45.749148 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:45:45.749155 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:45:45.749174 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:45:45.749181 kernel: ACPI: Interpreter enabled
Sep 10 23:45:45.749188 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:45:45.749206 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:45:45.749213 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:45:45.749220 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:45:45.749227 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:45:45.749234 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:45:45.749242 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:45:45.749249 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:45:45.749256 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:45:45.749420 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:45:45.749492 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:45:45.749552 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:45:45.749651 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:45:45.749731 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:45:45.749742 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:45:45.749750 kernel: PCI host bridge to bus 0000:00
Sep 10 23:45:45.749823 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:45:45.749880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:45:45.749939 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:45:45.749997 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:45:45.750090 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:45:45.750265 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:45:45.750404 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:45:45.750474 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:45:45.750550 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:45:45.750703 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:45:45.750776 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:45:45.750849 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:45:45.750913 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:45:45.750979 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:45:45.751040 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:45:45.751050 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:45:45.751057 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:45:45.751064 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:45:45.751073 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:45:45.751080 kernel: iommu: Default domain type: Translated
Sep 10 23:45:45.751087 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:45:45.751094 kernel: efivars: Registered efivars operations
Sep 10 23:45:45.751101 kernel: vgaarb: loaded
Sep 10 23:45:45.751108 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:45:45.751115 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:45:45.751122 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:45:45.751129 kernel: pnp: PnP ACPI init
Sep 10 23:45:45.751214 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:45:45.751225 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:45:45.751232 kernel: NET: Registered PF_INET protocol family
Sep 10 23:45:45.751239 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:45:45.751246 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:45:45.751254 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:45:45.751261 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:45:45.751268 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:45:45.751277 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:45:45.751285 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:45:45.751292 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:45:45.751299 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:45:45.751326 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:45:45.751334 kernel: kvm [1]: HYP mode not available
Sep 10 23:45:45.751341 kernel: Initialise system trusted keyrings
Sep 10 23:45:45.751348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:45:45.751355 kernel: Key type asymmetric registered
Sep 10 23:45:45.751365 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:45:45.751372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:45:45.751379 kernel: io scheduler mq-deadline registered
Sep 10 23:45:45.751385 kernel: io scheduler kyber registered
Sep 10 23:45:45.751392 kernel: io scheduler bfq registered
Sep 10 23:45:45.751399 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:45:45.751410 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:45:45.751417 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:45:45.751489 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:45:45.751502 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:45:45.751509 kernel: thunder_xcv, ver 1.0
Sep 10 23:45:45.751517 kernel: thunder_bgx, ver 1.0
Sep 10 23:45:45.751524 kernel: nicpf, ver 1.0
Sep 10 23:45:45.751536 kernel: nicvf, ver 1.0
Sep 10 23:45:45.751623 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:45:45.751684 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:45:45 UTC (1757547945)
Sep 10 23:45:45.751694 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:45:45.751704 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:45:45.751711 kernel: watchdog: NMI not fully supported
Sep 10 23:45:45.751718 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:45:45.751725 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:45:45.751732 kernel: Segment Routing with IPv6
Sep 10 23:45:45.751740 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:45:45.751747 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:45:45.751755 kernel: Key type dns_resolver registered
Sep 10 23:45:45.751761 kernel: registered taskstats version 1
Sep 10 23:45:45.751768 kernel: Loading compiled-in X.509 certificates
Sep 10 23:45:45.751786 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7'
Sep 10 23:45:45.751794 kernel: Demotion targets for Node 0: null
Sep 10 23:45:45.751801 kernel: Key type .fscrypt registered
Sep 10 23:45:45.751808 kernel: Key type fscrypt-provisioning registered
Sep 10 23:45:45.751815 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:45:45.751822 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:45:45.751829 kernel: ima: No architecture policies found
Sep 10 23:45:45.751836 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:45:45.751844 kernel: clk: Disabling unused clocks
Sep 10 23:45:45.751851 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:45:45.751858 kernel: Warning: unable to open an initial console.
Sep 10 23:45:45.751866 kernel: Freeing unused kernel memory: 38976K
Sep 10 23:45:45.751873 kernel: Run /init as init process
Sep 10 23:45:45.751879 kernel: with arguments:
Sep 10 23:45:45.751887 kernel: /init
Sep 10 23:45:45.751896 kernel: with environment:
Sep 10 23:45:45.751904 kernel: HOME=/
Sep 10 23:45:45.751913 kernel: TERM=linux
Sep 10 23:45:45.751920 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:45:45.751928 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:45:45.751941 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:45:45.751949 systemd[1]: Detected virtualization kvm.
Sep 10 23:45:45.751957 systemd[1]: Detected architecture arm64.
Sep 10 23:45:45.751964 systemd[1]: Running in initrd.
Sep 10 23:45:45.751974 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:45:45.751984 systemd[1]: Hostname set to .
Sep 10 23:45:45.751991 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:45:45.751998 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:45:45.752006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:45:45.752013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:45:45.752021 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:45:45.752032 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:45:45.752040 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:45:45.752050 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:45:45.752058 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:45:45.752066 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:45:45.752073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:45:45.752083 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:45:45.752092 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:45:45.752099 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:45:45.752109 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:45:45.752117 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:45:45.752125 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:45:45.752132 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:45:45.752140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:45:45.752148 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:45:45.752156 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:45:45.752163 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:45:45.752172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:45:45.752180 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:45:45.752187 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:45:45.752195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:45:45.752203 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:45:45.752211 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:45:45.752218 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:45:45.752226 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:45:45.752234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:45:45.752243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:45:45.752251 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:45:45.752259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:45:45.752266 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:45:45.752275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:45:45.752301 systemd-journald[244]: Collecting audit messages is disabled.
Sep 10 23:45:45.752338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:45:45.752346 systemd-journald[244]: Journal started
Sep 10 23:45:45.752367 systemd-journald[244]: Runtime Journal (/run/log/journal/22eb13bec65842cab0dbeab3c841dbd1) is 6M, max 48.5M, 42.4M free.
Sep 10 23:45:45.762421 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:45:45.762453 kernel: Bridge firewalling registered
Sep 10 23:45:45.742388 systemd-modules-load[245]: Inserted module 'overlay'
Sep 10 23:45:45.756637 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 10 23:45:45.766116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:45:45.767321 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:45:45.768550 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:45:45.771707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:45:45.773177 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:45:45.774600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:45:45.780943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:45:45.788006 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:45:45.789889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:45:45.791682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:45:45.795368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:45:45.796705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:45:45.800145 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:45:45.802845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:45:45.824978 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:45:45.839252 systemd-resolved[288]: Positive Trust Anchors:
Sep 10 23:45:45.839272 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:45:45.839303 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:45:45.844120 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 10 23:45:45.845510 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:45:45.849011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:45:45.909366 kernel: SCSI subsystem initialized
Sep 10 23:45:45.913322 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:45:45.921320 kernel: iscsi: registered transport (tcp)
Sep 10 23:45:45.934331 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:45:45.934362 kernel: QLogic iSCSI HBA Driver
Sep 10 23:45:45.952133 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:45:45.970371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:45:45.972593 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:45:46.018394 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:45:46.020466 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:45:46.077336 kernel: raid6: neonx8 gen() 15770 MB/s
Sep 10 23:45:46.094323 kernel: raid6: neonx4 gen() 15799 MB/s
Sep 10 23:45:46.111320 kernel: raid6: neonx2 gen() 13135 MB/s
Sep 10 23:45:46.128322 kernel: raid6: neonx1 gen() 10406 MB/s
Sep 10 23:45:46.145323 kernel: raid6: int64x8 gen() 6889 MB/s
Sep 10 23:45:46.162318 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 10 23:45:46.179322 kernel: raid6: int64x2 gen() 6096 MB/s
Sep 10 23:45:46.196331 kernel: raid6: int64x1 gen() 5040 MB/s
Sep 10 23:45:46.196361 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s
Sep 10 23:45:46.213325 kernel: raid6: .... xor() 12361 MB/s, rmw enabled
Sep 10 23:45:46.213343 kernel: raid6: using neon recovery algorithm
Sep 10 23:45:46.218681 kernel: xor: measuring software checksum speed
Sep 10 23:45:46.218702 kernel: 8regs : 21550 MB/sec
Sep 10 23:45:46.219328 kernel: 32regs : 21687 MB/sec
Sep 10 23:45:46.219344 kernel: arm64_neon : 26604 MB/sec
Sep 10 23:45:46.220327 kernel: xor: using function: arm64_neon (26604 MB/sec)
Sep 10 23:45:46.272335 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:45:46.280388 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:45:46.282754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:45:46.309852 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Sep 10 23:45:46.314220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:45:46.316072 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:45:46.346986 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Sep 10 23:45:46.370332 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:45:46.372446 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:45:46.429191 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:45:46.431628 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:45:46.492127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:45:46.492255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:45:46.503080 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 23:45:46.503280 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 23:45:46.501752 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:45:46.504821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:45:46.508405 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:45:46.508435 kernel: GPT:9289727 != 19775487
Sep 10 23:45:46.508446 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:45:46.509819 kernel: GPT:9289727 != 19775487
Sep 10 23:45:46.509835 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:45:46.510394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:45:46.530790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 23:45:46.532323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:45:46.540882 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 23:45:46.542879 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:45:46.563058 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 23:45:46.564100 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 23:45:46.572260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:45:46.573350 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:45:46.575052 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:45:46.576687 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:45:46.578949 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:45:46.580587 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:45:46.600141 disk-uuid[589]: Primary Header is updated.
Sep 10 23:45:46.600141 disk-uuid[589]: Secondary Entries is updated.
Sep 10 23:45:46.600141 disk-uuid[589]: Secondary Header is updated.
Sep 10 23:45:46.604254 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:45:46.607273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:45:47.611467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:45:47.611530 disk-uuid[593]: The operation has completed successfully.
Sep 10 23:45:47.636296 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:45:47.636410 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:45:47.665208 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:45:47.690347 sh[609]: Success
Sep 10 23:45:47.704539 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:45:47.704604 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:45:47.706016 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 23:45:47.714338 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 23:45:47.739759 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:45:47.742005 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:45:47.757402 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:45:47.764295 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (621)
Sep 10 23:45:47.764356 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade
Sep 10 23:45:47.764377 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:45:47.768797 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:45:47.768830 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 23:45:47.769757 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:45:47.770960 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:45:47.772101 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:45:47.772907 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:45:47.775803 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:45:47.800346 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652)
Sep 10 23:45:47.802793 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:45:47.802828 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:45:47.804933 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:45:47.804970 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:45:47.809321 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:45:47.809814 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 23:45:47.811988 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 23:45:47.882832 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:45:47.886123 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:45:47.921091 ignition[698]: Ignition 2.21.0
Sep 10 23:45:47.921106 ignition[698]: Stage: fetch-offline
Sep 10 23:45:47.921226 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:45:47.921241 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:45:47.921429 ignition[698]: parsed url from cmdline: ""
Sep 10 23:45:47.921432 ignition[698]: no config URL provided
Sep 10 23:45:47.921437 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 23:45:47.921445 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Sep 10 23:45:47.921466 ignition[698]: op(1): [started] loading QEMU firmware config module
Sep 10 23:45:47.921470 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 23:45:47.927209 ignition[698]: op(1): [finished] loading QEMU firmware config module
Sep 10 23:45:47.929133 systemd-networkd[800]: lo: Link UP
Sep 10 23:45:47.929137 systemd-networkd[800]: lo: Gained carrier
Sep 10 23:45:47.929872 systemd-networkd[800]: Enumeration completed
Sep 10 23:45:47.929988 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:45:47.930275 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:45:47.930278 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:45:47.930768 systemd-networkd[800]: eth0: Link UP
Sep 10 23:45:47.931109 systemd-networkd[800]: eth0: Gained carrier
Sep 10 23:45:47.931119 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:45:47.931910 systemd[1]: Reached target network.target - Network.
Sep 10 23:45:47.943362 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:45:47.979218 ignition[698]: parsing config with SHA512: 184b7cc18c6d5d4edb7db1d9ccf849aceabdba6a0b95dc5b6cb891e8a720dca234380c4b16872f1e089f33ca3f20d00b8c64883900ca1403273aaa6c086d8560
Sep 10 23:45:47.983446 unknown[698]: fetched base config from "system"
Sep 10 23:45:47.983461 unknown[698]: fetched user config from "qemu"
Sep 10 23:45:47.983894 ignition[698]: fetch-offline: fetch-offline passed
Sep 10 23:45:47.983951 ignition[698]: Ignition finished successfully
Sep 10 23:45:47.986329 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:45:47.988709 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 23:45:47.989627 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 23:45:48.032831 ignition[811]: Ignition 2.21.0
Sep 10 23:45:48.032848 ignition[811]: Stage: kargs
Sep 10 23:45:48.032996 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:45:48.033005 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:45:48.035000 ignition[811]: kargs: kargs passed
Sep 10 23:45:48.035068 ignition[811]: Ignition finished successfully
Sep 10 23:45:48.037798 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 23:45:48.039663 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 23:45:48.065266 ignition[820]: Ignition 2.21.0
Sep 10 23:45:48.065283 ignition[820]: Stage: disks
Sep 10 23:45:48.065515 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:45:48.065525 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:45:48.067512 ignition[820]: disks: disks passed
Sep 10 23:45:48.067586 ignition[820]: Ignition finished successfully
Sep 10 23:45:48.069981 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 23:45:48.071262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 23:45:48.072837 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 23:45:48.074703 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:45:48.076337 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:45:48.077815 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:45:48.080203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 23:45:48.109476 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 10 23:45:48.114028 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 23:45:48.116278 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 23:45:48.188350 kernel: EXT4-fs (vda9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none.
Sep 10 23:45:48.188159 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 23:45:48.189348 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:45:48.191563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:45:48.195445 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 23:45:48.196233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 23:45:48.196274 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 23:45:48.196297 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:45:48.206256 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 23:45:48.208749 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 23:45:48.213016 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840)
Sep 10 23:45:48.213053 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:45:48.213909 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:45:48.216743 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:45:48.216783 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:45:48.218617 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:45:48.250149 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 23:45:48.254013 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 10 23:45:48.258343 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 23:45:48.261409 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 23:45:48.336380 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 23:45:48.338086 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 23:45:48.339519 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 23:45:48.360395 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:45:48.373816 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 23:45:48.384484 ignition[954]: INFO : Ignition 2.21.0
Sep 10 23:45:48.384484 ignition[954]: INFO : Stage: mount
Sep 10 23:45:48.386680 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:45:48.386680 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:45:48.388632 ignition[954]: INFO : mount: mount passed
Sep 10 23:45:48.390088 ignition[954]: INFO : Ignition finished successfully
Sep 10 23:45:48.390925 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 23:45:48.392686 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 23:45:48.763160 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 23:45:48.764761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:45:48.794619 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967)
Sep 10 23:45:48.794672 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:45:48.794684 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:45:48.798462 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:45:48.798521 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:45:48.800128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:45:48.828050 ignition[984]: INFO : Ignition 2.21.0
Sep 10 23:45:48.828050 ignition[984]: INFO : Stage: files
Sep 10 23:45:48.829414 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:45:48.829414 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:45:48.830987 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 23:45:48.832385 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 23:45:48.832385 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 23:45:48.835261 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 23:45:48.836324 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 23:45:48.836324 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 23:45:48.835834 unknown[984]: wrote ssh authorized keys file for user: core
Sep 10 23:45:48.840390 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 23:45:48.841775 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 10 23:45:48.896990 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 23:45:49.024557 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 23:45:49.024557 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 23:45:49.027595 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 10 23:45:49.150451 systemd-networkd[800]: eth0: Gained IPv6LL
Sep 10 23:45:49.211539 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 23:45:49.313155 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 23:45:49.313155 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:45:49.316531 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:45:49.328295 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:45:49.328295 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:45:49.328295 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:45:49.328295 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:45:49.328295 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 10 23:45:49.676540 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 23:45:50.086189 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:45:50.086189 ignition[984]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 23:45:50.089580 ignition[984]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:45:50.091270 ignition[984]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:45:50.091270 ignition[984]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 23:45:50.091270 ignition[984]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 23:45:50.091270 ignition[984]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:45:50.091270 ignition[984]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:45:50.091270 ignition[984]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 23:45:50.091270 ignition[984]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:45:50.108754 ignition[984]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:45:50.112485 ignition[984]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:45:50.114801 ignition[984]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:45:50.114801 ignition[984]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 23:45:50.114801 ignition[984]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 23:45:50.114801 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:45:50.114801 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:45:50.114801 ignition[984]: INFO : files: files passed
Sep 10 23:45:50.114801 ignition[984]: INFO : Ignition finished successfully
Sep 10 23:45:50.116229 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 23:45:50.120507 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 23:45:50.123045 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 23:45:50.139474 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 23:45:50.140431 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 23:45:50.142924 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 23:45:50.144496 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:45:50.144496 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:45:50.147611 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:45:50.148378 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:45:50.150129 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:45:50.152899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 23:45:50.241020 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 23:45:50.241153 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 23:45:50.243157 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 23:45:50.244576 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 23:45:50.245954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 23:45:50.246804 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 23:45:50.290089 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:45:50.292353 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 23:45:50.318583 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:45:50.319635 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:45:50.321272 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 23:45:50.322775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 23:45:50.322900 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:45:50.324990 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 23:45:50.326525 systemd[1]: Stopped target basic.target - Basic System. Sep 10 23:45:50.327782 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 23:45:50.329153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:45:50.330675 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 23:45:50.332238 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:45:50.333821 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 23:45:50.335183 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:45:50.336732 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 23:45:50.338187 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 23:45:50.339643 systemd[1]: Stopped target swap.target - Swaps. Sep 10 23:45:50.340827 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 23:45:50.340952 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:45:50.342872 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:45:50.344412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:45:50.345988 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 23:45:50.349390 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:45:50.350368 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 23:45:50.350490 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 23:45:50.352789 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 23:45:50.352908 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:45:50.354416 systemd[1]: Stopped target paths.target - Path Units. Sep 10 23:45:50.355669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 23:45:50.355775 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:45:50.357353 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 23:45:50.358564 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 10 23:45:50.359883 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 23:45:50.359965 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 23:45:50.361663 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 23:45:50.361737 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 23:45:50.362967 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 23:45:50.363074 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:45:50.364439 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 23:45:50.364533 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 23:45:50.366472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 23:45:50.367507 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 23:45:50.367728 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:45:50.369855 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 23:45:50.371347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 23:45:50.371461 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:45:50.372950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 23:45:50.373044 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:45:50.379261 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 23:45:50.379370 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 23:45:50.387110 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 23:45:50.399102 ignition[1039]: INFO : Ignition 2.21.0 Sep 10 23:45:50.399102 ignition[1039]: INFO : Stage: umount Sep 10 23:45:50.400597 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:45:50.400597 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:45:50.400597 ignition[1039]: INFO : umount: umount passed Sep 10 23:45:50.400597 ignition[1039]: INFO : Ignition finished successfully Sep 10 23:45:50.402451 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 23:45:50.404357 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 23:45:50.406218 systemd[1]: Stopped target network.target - Network. Sep 10 23:45:50.407219 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 23:45:50.407282 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 23:45:50.408569 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 23:45:50.408610 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 23:45:50.409914 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 23:45:50.409957 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 23:45:50.411253 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 23:45:50.411291 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 23:45:50.412947 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 23:45:50.414261 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 23:45:50.421071 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 23:45:50.421189 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Sep 10 23:45:50.424681 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 10 23:45:50.424868 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 23:45:50.424970 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 23:45:50.427891 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 10 23:45:50.428509 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 10 23:45:50.429974 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 23:45:50.430012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 23:45:50.432973 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 23:45:50.434003 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 23:45:50.434060 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:45:50.435627 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:45:50.435668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:45:50.437969 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 23:45:50.438011 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 23:45:50.439377 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 23:45:50.439416 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 23:45:50.441798 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:45:50.445169 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 23:45:50.445225 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 10 23:45:50.462007 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 23:45:50.462177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:45:50.464054 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 23:45:50.464100 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 23:45:50.465814 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 23:45:50.465852 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:45:50.467328 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 23:45:50.467403 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:45:50.469787 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 23:45:50.469841 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 23:45:50.473998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 23:45:50.474064 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 23:45:50.477300 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 23:45:50.479210 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 10 23:45:50.479292 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 23:45:50.482128 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
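The stretch from dracut-pre-pivot above down to network-cleanup is the initramfs unwinding itself before the pivot: targets, sockets and the udev/networkd/resolved instances that only existed for early boot are all stopped in order. A small, hedged helper for reading such a capture and listing that teardown order (the input file is whatever this log was saved as; the regex only understands the "systemd[1]: Stopped ..." wording used here, and the function name is mine):

# List unit/target names from "systemd[1]: Stopped ..." entries, in order of
# appearance. Several journal entries share one physical line in this
# capture, so findall is applied per line rather than one match per line.
import re
import sys

STOP_RE = re.compile(r"systemd\[1\]: Stopped (?:target )?(\S+)")

def stopped_units(lines):
    for line in lines:
        for name in STOP_RE.findall(line):
            yield name

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for name in stopped_units(fh):
            print(name)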
Sep 10 23:45:50.482180 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:45:50.485106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:45:50.485165 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:45:50.488885 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 10 23:45:50.488952 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 10 23:45:50.488987 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 23:45:50.489277 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 23:45:50.490777 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 23:45:50.492120 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 23:45:50.492232 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 23:45:50.494453 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 23:45:50.494502 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 23:45:50.496610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 23:45:50.498365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 23:45:50.502575 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 23:45:50.506716 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 23:45:50.537031 systemd[1]: Switching root. Sep 10 23:45:50.572201 systemd-journald[244]: Journal stopped Sep 10 23:45:51.318623 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Sep 10 23:45:51.318678 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 23:45:51.318695 kernel: SELinux: policy capability open_perms=1 Sep 10 23:45:51.318705 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 23:45:51.318715 kernel: SELinux: policy capability always_check_network=0 Sep 10 23:45:51.318725 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 23:45:51.318736 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 23:45:51.318746 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 23:45:51.318759 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 23:45:51.318772 kernel: SELinux: policy capability userspace_initial_context=0 Sep 10 23:45:51.318782 kernel: audit: type=1403 audit(1757547950.752:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 23:45:51.318798 systemd[1]: Successfully loaded SELinux policy in 45.083ms. Sep 10 23:45:51.318815 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.660ms. Sep 10 23:45:51.318827 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 23:45:51.318839 systemd[1]: Detected virtualization kvm. Sep 10 23:45:51.318851 systemd[1]: Detected architecture arm64. Sep 10 23:45:51.318861 systemd[1]: Detected first boot. Sep 10 23:45:51.318871 systemd[1]: Initializing machine ID from VM UUID. 
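The "systemd 256.8 running in system mode (+PAM +AUDIT ... +LIBARCHIVE)" entry above lists the compile-time feature set: a leading "+" marks a feature built in and "-" one built out, which is the convention systemd's version banner uses. A quick sketch that splits the exact flag string from this log into the two sets (string parsing only; nothing here queries the running system):

# Split the systemd feature string printed above into enabled/disabled sets.
# "+" marks a feature compiled in, "-" one compiled out.
FLAGS = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
         "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
         "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
         "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB "
         "+ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

def split_features(flags: str):
    enabled = {f[1:] for f in flags.split() if f.startswith("+")}
    disabled = {f[1:] for f in flags.split() if f.startswith("-")}
    return enabled, disabled

if __name__ == "__main__":
    enabled, disabled = split_features(FLAGS)
    print("enabled: ", ", ".join(sorted(enabled)))
    print("disabled:", ", ".join(sorted(disabled)))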
Sep 10 23:45:51.318881 kernel: NET: Registered PF_VSOCK protocol family Sep 10 23:45:51.318891 zram_generator::config[1084]: No configuration found. Sep 10 23:45:51.318902 systemd[1]: Populated /etc with preset unit settings. Sep 10 23:45:51.318913 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 10 23:45:51.318924 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 23:45:51.318935 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 23:45:51.318945 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 23:45:51.318956 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 23:45:51.318966 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 23:45:51.318976 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 23:45:51.318986 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 23:45:51.318997 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 23:45:51.319007 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 23:45:51.319018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 23:45:51.319030 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 23:45:51.319041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:45:51.319051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:45:51.319062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 23:45:51.319073 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 23:45:51.319083 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 23:45:51.319093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 23:45:51.319103 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 10 23:45:51.319114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:45:51.319125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:45:51.319136 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 23:45:51.319146 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 23:45:51.319156 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 23:45:51.319171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 23:45:51.319181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:45:51.319191 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 23:45:51.319203 systemd[1]: Reached target slices.target - Slice Units. Sep 10 23:45:51.319213 systemd[1]: Reached target swap.target - Swaps. Sep 10 23:45:51.319224 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 23:45:51.319234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 23:45:51.319244 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
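Unit names above such as system-serial\x2dgetty.slice or dev-disk-by\x2dlabel-OEM.device use systemd's name escaping, where a byte that is special in a unit name is written as \xHH (0x2d is "-"). A simplified decoder for that notation follows; it only reverses the \xHH form and does not attempt the full systemd-escape rules (for example the mapping between "/" and "-" in path-derived names):

# Decode \xHH sequences in systemd-escaped unit names, as seen in the
# slice/device/mount names above. Simplified: \xHH only.
import re

def unescape(name: str) -> str:
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)

if __name__ == "__main__":
    for n in (r"system-serial\x2dgetty.slice",
              r"dev-disk-by\x2dlabel-OEM.device",
              r"run-credentials-systemd\x2dresolved.service.mount"):
        print(f"{n} -> {unescape(n)}")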
Sep 10 23:45:51.319254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 23:45:51.319264 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 23:45:51.319275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:45:51.319285 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 23:45:51.319296 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 23:45:51.319320 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 23:45:51.319331 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 23:45:51.319342 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 23:45:51.319352 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 23:45:51.319363 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 23:45:51.319373 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 23:45:51.319384 systemd[1]: Reached target machines.target - Containers. Sep 10 23:45:51.319395 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 23:45:51.319407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:45:51.319417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 23:45:51.319428 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 23:45:51.319438 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 23:45:51.319448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 23:45:51.319458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:45:51.319469 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 23:45:51.319479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 23:45:51.319489 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 23:45:51.319501 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 23:45:51.319511 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 23:45:51.319521 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 23:45:51.319531 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 23:45:51.319551 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:45:51.319568 kernel: fuse: init (API version 7.41) Sep 10 23:45:51.319578 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 23:45:51.319588 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 23:45:51.319599 kernel: loop: module loaded Sep 10 23:45:51.319609 kernel: ACPI: bus type drm_connector registered Sep 10 23:45:51.319618 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
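The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop instances above load kernel modules on demand, and the interleaved kernel lines ("fuse: init (API version 7.41)", "loop: module loaded", "ACPI: bus type drm_connector registered") show the corresponding drivers coming up. A hedged check of which of those names appear as loadable modules on a running system; drivers built into the kernel provide the same functionality but never show up in /proc/modules:

# Report which of the module names used by the modprobe@ units above are
# present in /proc/modules. Built-in drivers will not be listed there.
NAMES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def loaded_modules(path: str = "/proc/modules") -> set:
    with open(path, encoding="utf-8") as fh:
        return {line.split()[0] for line in fh if line.strip()}

if __name__ == "__main__":
    loaded = loaded_modules()
    for name in NAMES:
        state = ("loaded as module" if name in loaded
                 else "not in /proc/modules (possibly built-in)")
        print(f"{name}: {state}")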
Sep 10 23:45:51.319629 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 23:45:51.319640 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 10 23:45:51.319650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 23:45:51.319662 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 23:45:51.319673 systemd[1]: Stopped verity-setup.service. Sep 10 23:45:51.319683 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 23:45:51.319693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 23:45:51.319790 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 23:45:51.319838 systemd-journald[1159]: Collecting audit messages is disabled. Sep 10 23:45:51.319863 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 23:45:51.319877 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 23:45:51.319888 systemd-journald[1159]: Journal started Sep 10 23:45:51.319912 systemd-journald[1159]: Runtime Journal (/run/log/journal/22eb13bec65842cab0dbeab3c841dbd1) is 6M, max 48.5M, 42.4M free. Sep 10 23:45:51.123084 systemd[1]: Queued start job for default target multi-user.target. Sep 10 23:45:51.144260 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 23:45:51.144667 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 23:45:51.323405 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 23:45:51.324553 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 23:45:51.326668 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 23:45:51.328043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:45:51.329363 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 23:45:51.329524 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 23:45:51.330764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:45:51.330921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:45:51.332171 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 23:45:51.333400 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 23:45:51.334610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:45:51.334774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:45:51.336072 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 23:45:51.336255 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 23:45:51.337497 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:45:51.337658 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:45:51.338810 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 23:45:51.339968 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 23:45:51.341462 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 23:45:51.342782 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 23:45:51.356605 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 10 23:45:51.358801 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 23:45:51.360851 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 23:45:51.361862 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 23:45:51.361896 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:45:51.363690 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 23:45:51.369467 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 23:45:51.370378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:45:51.371503 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 23:45:51.373609 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 23:45:51.374841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 23:45:51.377430 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 23:45:51.378642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 23:45:51.381480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:45:51.382200 systemd-journald[1159]: Time spent on flushing to /var/log/journal/22eb13bec65842cab0dbeab3c841dbd1 is 15.003ms for 890 entries. Sep 10 23:45:51.382200 systemd-journald[1159]: System Journal (/var/log/journal/22eb13bec65842cab0dbeab3c841dbd1) is 8M, max 195.6M, 187.6M free. Sep 10 23:45:51.411562 systemd-journald[1159]: Received client request to flush runtime journal. Sep 10 23:45:51.411623 kernel: loop0: detected capacity change from 0 to 107312 Sep 10 23:45:51.384550 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 23:45:51.387660 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 23:45:51.391869 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:45:51.393149 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 23:45:51.394388 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 23:45:51.402355 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 23:45:51.403712 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 23:45:51.407893 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 23:45:51.412725 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 23:45:51.427332 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 23:45:51.434626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:45:51.443871 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 23:45:51.448378 kernel: loop1: detected capacity change from 0 to 211168 Sep 10 23:45:51.452320 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
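journald reports 15.003 ms spent flushing 890 entries to the persistent journal, with the system journal at 8M of a 195.6M cap (the runtime journal was 6M of 48.5M a little earlier). Those figures pin down the per-entry flush cost and how full the persistent journal is; the arithmetic, using only numbers quoted above:

# Back-of-the-envelope numbers taken directly from the journald lines above.
flush_ms = 15.003          # "Time spent on flushing ... is 15.003ms"
entries = 890              # "... for 890 entries"
system_used_mib = 8.0      # "System Journal ... is 8M"
system_max_mib = 195.6     # "... max 195.6M"

per_entry_us = flush_ms * 1000.0 / entries
used_pct = 100.0 * system_used_mib / system_max_mib

print(f"flush cost per entry: {per_entry_us:.1f} µs")    # ≈ 16.9 µs
print(f"system journal usage: {used_pct:.1f} % of cap")  # ≈ 4.1 %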
Sep 10 23:45:51.454916 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 23:45:51.478345 kernel: loop2: detected capacity change from 0 to 138376 Sep 10 23:45:51.487630 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 10 23:45:51.487648 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 10 23:45:51.492391 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:45:51.504364 kernel: loop3: detected capacity change from 0 to 107312 Sep 10 23:45:51.511343 kernel: loop4: detected capacity change from 0 to 211168 Sep 10 23:45:51.517361 kernel: loop5: detected capacity change from 0 to 138376 Sep 10 23:45:51.523517 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 23:45:51.523989 (sd-merge)[1223]: Merged extensions into '/usr'. Sep 10 23:45:51.527753 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 23:45:51.527768 systemd[1]: Reloading... Sep 10 23:45:51.586333 zram_generator::config[1250]: No configuration found. Sep 10 23:45:51.670166 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 23:45:51.676076 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:45:51.740147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 23:45:51.740273 systemd[1]: Reloading finished in 212 ms. Sep 10 23:45:51.779028 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 23:45:51.780296 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 23:45:51.795870 systemd[1]: Starting ensure-sysext.service... Sep 10 23:45:51.797591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 23:45:51.810134 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Sep 10 23:45:51.810148 systemd[1]: Reloading... Sep 10 23:45:51.814171 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 10 23:45:51.814206 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 10 23:45:51.814468 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 23:45:51.814664 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 23:45:51.815265 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 23:45:51.815489 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Sep 10 23:45:51.815545 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Sep 10 23:45:51.818561 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 23:45:51.818571 systemd-tmpfiles[1285]: Skipping /boot Sep 10 23:45:51.829794 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 23:45:51.829809 systemd-tmpfiles[1285]: Skipping /boot Sep 10 23:45:51.859363 zram_generator::config[1313]: No configuration found. 
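The (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier in this log, backed by the .raw file under /opt/extensions/kubernetes. A small inspection helper for those two locations (paths taken from this log; output naturally differs per machine, and either directory may be absent):

# List extension images/symlinks in the directories this log uses for
# systemd-sysext (/etc/extensions) and the downloaded payloads
# (/opt/extensions/kubernetes). Inspection only.
import os

DIRS = ["/etc/extensions", "/opt/extensions/kubernetes"]

def list_dir(path: str) -> None:
    print(f"{path}:")
    if not os.path.isdir(path):
        print("  (not present)")
        return
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        suffix = f" -> {os.readlink(full)}" if os.path.islink(full) else ""
        print(f"  {name}{suffix}")

if __name__ == "__main__":
    for d in DIRS:
        list_dir(d)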
Sep 10 23:45:51.928483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:45:51.991830 systemd[1]: Reloading finished in 181 ms. Sep 10 23:45:52.000894 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 23:45:52.006206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 23:45:52.015518 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:45:52.017752 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 23:45:52.019896 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 23:45:52.022452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 23:45:52.026490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:45:52.028933 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 23:45:52.035015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:45:52.038457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 23:45:52.042442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:45:52.044420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 23:45:52.046750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:45:52.046878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:45:52.048747 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 23:45:52.052064 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 23:45:52.054761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:45:52.054955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:45:52.056752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:45:52.056931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:45:52.058541 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:45:52.058701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:45:52.067231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:45:52.069728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 23:45:52.073444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:45:52.074193 systemd-udevd[1353]: Using default interface naming scheme 'v255'. Sep 10 23:45:52.078491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 23:45:52.079566 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 10 23:45:52.079806 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:45:52.081765 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 23:45:52.084477 augenrules[1384]: No rules Sep 10 23:45:52.085591 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 23:45:52.087396 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:45:52.087658 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:45:52.089216 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 23:45:52.090935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:45:52.091072 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:45:52.092840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:45:52.092992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:45:52.094641 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:45:52.094818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:45:52.101206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:45:52.103729 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 23:45:52.111579 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 23:45:52.112962 systemd[1]: Finished ensure-sysext.service. Sep 10 23:45:52.142426 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:45:52.143269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:45:52.144364 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 23:45:52.146263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 23:45:52.154892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:45:52.157420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 23:45:52.158281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:45:52.158333 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:45:52.162487 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:45:52.165033 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 23:45:52.166105 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 23:45:52.181475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:45:52.181988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:45:52.183668 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 23:45:52.183856 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 10 23:45:52.185470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:45:52.185654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:45:52.187469 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:45:52.187660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:45:52.196105 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 10 23:45:52.196463 augenrules[1429]: /sbin/augenrules: No change Sep 10 23:45:52.196785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 23:45:52.196877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 23:45:52.205724 augenrules[1462]: No rules Sep 10 23:45:52.207322 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:45:52.207846 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:45:52.272295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 23:45:52.278590 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 23:45:52.312752 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 23:45:52.313814 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:45:52.318727 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 23:45:52.332216 systemd-networkd[1438]: lo: Link UP Sep 10 23:45:52.332227 systemd-networkd[1438]: lo: Gained carrier Sep 10 23:45:52.333103 systemd-networkd[1438]: Enumeration completed Sep 10 23:45:52.333206 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:45:52.334577 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:45:52.334588 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:45:52.336416 systemd-networkd[1438]: eth0: Link UP Sep 10 23:45:52.336562 systemd-networkd[1438]: eth0: Gained carrier Sep 10 23:45:52.336582 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:45:52.336985 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:45:52.341507 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:45:52.350951 systemd-resolved[1351]: Positive Trust Anchors: Sep 10 23:45:52.351923 systemd-resolved[1351]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 23:45:52.351962 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 23:45:52.360096 systemd-resolved[1351]: Defaulting to hostname 'linux'. Sep 10 23:45:52.362781 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:45:52.363015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:45:52.364472 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 23:45:52.366010 systemd[1]: Reached target network.target - Network. Sep 10 23:45:52.367446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:45:52.367892 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. Sep 10 23:45:51.898814 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 23:45:51.904523 systemd-journald[1159]: Time jumped backwards, rotating. Sep 10 23:45:51.898881 systemd-timesyncd[1441]: Initial clock synchronization to Wed 2025-09-10 23:45:51.898714 UTC. Sep 10 23:45:51.901128 systemd-resolved[1351]: Clock change detected. Flushing caches. Sep 10 23:45:51.907797 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:45:51.942287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:45:51.943545 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:45:51.944568 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:45:51.945650 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:45:51.946844 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:45:51.947879 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:45:51.949004 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:45:51.950101 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:45:51.950143 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:45:51.950844 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:45:51.952848 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:45:51.955225 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:45:51.958278 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:45:51.959507 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:45:51.960546 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
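Note the timestamps running backwards in the middle of this stretch: systemd-timesyncd contacts 10.0.0.1:123 and steps the clock to 23:45:51.898714 UTC while the previous entry was stamped 23:45:52.367892, so journald logs "Time jumped backwards, rotating" and resolved flushes its caches. The size of the step follows directly from those two quoted timestamps:

# Compute the backwards clock step implied by the two timestamps quoted in
# the log: the last pre-sync entry and the synchronized time timesyncd set.
from datetime import datetime

before_sync = datetime(2025, 9, 10, 23, 45, 52, 367892)
after_sync = datetime(2025, 9, 10, 23, 45, 51, 898714)

step = (before_sync - after_sync).total_seconds()
print(f"clock stepped back by about {step:.3f} s")   # ≈ 0.469 s

So the guest clock was running roughly half a second fast before the first NTP synchronization.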
Sep 10 23:45:51.965127 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:45:51.966614 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:45:51.968257 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:45:51.969284 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:45:51.970106 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:45:51.970911 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:45:51.970940 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:45:51.972092 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:45:51.974042 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:45:51.975855 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:45:51.977877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:45:51.979736 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:45:51.980731 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:45:51.982328 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:45:51.984459 jq[1505]: false Sep 10 23:45:51.984866 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:45:51.986912 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:45:51.990273 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:45:51.993552 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:45:51.996337 extend-filesystems[1506]: Found /dev/vda6 Sep 10 23:45:51.995313 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:45:51.995841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:45:51.997480 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:45:51.999845 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:45:52.003218 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:45:52.004299 extend-filesystems[1506]: Found /dev/vda9 Sep 10 23:45:52.010863 extend-filesystems[1506]: Checking size of /dev/vda9 Sep 10 23:45:52.004842 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:45:52.005012 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:45:52.017705 jq[1522]: true Sep 10 23:45:52.006438 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:45:52.006612 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:45:52.010622 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:45:52.011603 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 10 23:45:52.025839 update_engine[1521]: I20250910 23:45:52.025692 1521 main.cc:92] Flatcar Update Engine starting Sep 10 23:45:52.030830 tar[1526]: linux-arm64/LICENSE Sep 10 23:45:52.030830 tar[1526]: linux-arm64/helm Sep 10 23:45:52.032495 extend-filesystems[1506]: Resized partition /dev/vda9 Sep 10 23:45:52.036467 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:45:52.038664 jq[1531]: true Sep 10 23:45:52.043317 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:45:52.048739 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:45:52.054777 dbus-daemon[1503]: [system] SELinux support is enabled Sep 10 23:45:52.054935 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:45:52.058465 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:45:52.058499 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:45:52.059949 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:45:52.059972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:45:52.061439 update_engine[1521]: I20250910 23:45:52.061340 1521 update_check_scheduler.cc:74] Next update check in 8m38s Sep 10 23:45:52.062118 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:45:52.062354 systemd-logind[1516]: New seat seat0. Sep 10 23:45:52.062824 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:45:52.064077 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:45:52.076161 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:45:52.076913 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:45:52.095939 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:45:52.095939 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:45:52.095939 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:45:52.108394 extend-filesystems[1506]: Resized filesystem in /dev/vda9 Sep 10 23:45:52.104804 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:45:52.104997 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:45:52.114651 bash[1564]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:45:52.111732 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:45:52.113833 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
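The resize messages above give the root filesystem's size before and after the online grow in 4k blocks (553472 → 1864699), so the space added by resize2fs can be read straight off the log:

# Convert the block counts reported by EXT4/resize2fs above into GiB.
BLOCK_SIZE = 4096        # "(4k) blocks" per the extend-filesystems line
OLD_BLOCKS = 553_472     # "resizing filesystem from 553472 ..."
NEW_BLOCKS = 1_864_699   # "... to 1864699 blocks"

old_gib = OLD_BLOCKS * BLOCK_SIZE / 2**30
new_gib = NEW_BLOCKS * BLOCK_SIZE / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB, "
      f"grown by {new_gib - old_gib:.2f} GiB")   # ≈ 2.11 -> 7.11 GiB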
Sep 10 23:45:52.190560 locksmithd[1549]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:45:52.251151 containerd[1532]: time="2025-09-10T23:45:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:45:52.252213 containerd[1532]: time="2025-09-10T23:45:52.252177487Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 10 23:45:52.263697 containerd[1532]: time="2025-09-10T23:45:52.263645047Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.16µs" Sep 10 23:45:52.263697 containerd[1532]: time="2025-09-10T23:45:52.263688487Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:45:52.263697 containerd[1532]: time="2025-09-10T23:45:52.263708527Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:45:52.263899 containerd[1532]: time="2025-09-10T23:45:52.263878287Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:45:52.263924 containerd[1532]: time="2025-09-10T23:45:52.263900087Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:45:52.263962 containerd[1532]: time="2025-09-10T23:45:52.263926607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264149 containerd[1532]: time="2025-09-10T23:45:52.264122527Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264216 containerd[1532]: time="2025-09-10T23:45:52.264197367Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264506 containerd[1532]: time="2025-09-10T23:45:52.264480327Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264506 containerd[1532]: time="2025-09-10T23:45:52.264502567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264553 containerd[1532]: time="2025-09-10T23:45:52.264515287Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264553 containerd[1532]: time="2025-09-10T23:45:52.264524567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264624 containerd[1532]: time="2025-09-10T23:45:52.264604927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264828 containerd[1532]: time="2025-09-10T23:45:52.264804207Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264866 containerd[1532]: time="2025-09-10T23:45:52.264841407Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:45:52.264866 containerd[1532]: time="2025-09-10T23:45:52.264854087Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:45:52.264924 containerd[1532]: time="2025-09-10T23:45:52.264906567Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:45:52.265263 containerd[1532]: time="2025-09-10T23:45:52.265242087Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:45:52.265339 containerd[1532]: time="2025-09-10T23:45:52.265319687Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:45:52.268658 containerd[1532]: time="2025-09-10T23:45:52.268622007Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:45:52.268705 containerd[1532]: time="2025-09-10T23:45:52.268674407Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:45:52.268705 containerd[1532]: time="2025-09-10T23:45:52.268689447Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:45:52.268705 containerd[1532]: time="2025-09-10T23:45:52.268701607Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:45:52.268750 containerd[1532]: time="2025-09-10T23:45:52.268715287Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:45:52.268750 containerd[1532]: time="2025-09-10T23:45:52.268729487Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:45:52.268795 containerd[1532]: time="2025-09-10T23:45:52.268740887Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:45:52.268813 containerd[1532]: time="2025-09-10T23:45:52.268794247Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:45:52.268813 containerd[1532]: time="2025-09-10T23:45:52.268808007Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:45:52.268843 containerd[1532]: time="2025-09-10T23:45:52.268819167Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:45:52.268843 containerd[1532]: time="2025-09-10T23:45:52.268830527Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:45:52.268872 containerd[1532]: time="2025-09-10T23:45:52.268843367Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:45:52.268979 containerd[1532]: time="2025-09-10T23:45:52.268956047Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:45:52.269007 containerd[1532]: time="2025-09-10T23:45:52.268983847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:45:52.269007 containerd[1532]: time="2025-09-10T23:45:52.269002407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 
23:45:52.269040 containerd[1532]: time="2025-09-10T23:45:52.269013847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:45:52.269040 containerd[1532]: time="2025-09-10T23:45:52.269025247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:45:52.269040 containerd[1532]: time="2025-09-10T23:45:52.269037287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:45:52.269089 containerd[1532]: time="2025-09-10T23:45:52.269048607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 23:45:52.269089 containerd[1532]: time="2025-09-10T23:45:52.269059247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 23:45:52.269089 containerd[1532]: time="2025-09-10T23:45:52.269070527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:45:52.269089 containerd[1532]: time="2025-09-10T23:45:52.269086367Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:45:52.269181 containerd[1532]: time="2025-09-10T23:45:52.269098247Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:45:52.269349 containerd[1532]: time="2025-09-10T23:45:52.269328207Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:45:52.269394 containerd[1532]: time="2025-09-10T23:45:52.269349767Z" level=info msg="Start snapshots syncer" Sep 10 23:45:52.269394 containerd[1532]: time="2025-09-10T23:45:52.269381127Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:45:52.269659 containerd[1532]: time="2025-09-10T23:45:52.269620847Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:45:52.269754 containerd[1532]: time="2025-09-10T23:45:52.269673687Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:45:52.269754 containerd[1532]: time="2025-09-10T23:45:52.269747887Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:45:52.269902 containerd[1532]: time="2025-09-10T23:45:52.269879047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:45:52.269928 containerd[1532]: time="2025-09-10T23:45:52.269910647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:45:52.269928 containerd[1532]: time="2025-09-10T23:45:52.269924287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:45:52.269960 containerd[1532]: time="2025-09-10T23:45:52.269937487Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:45:52.269960 containerd[1532]: time="2025-09-10T23:45:52.269952767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:45:52.269996 containerd[1532]: time="2025-09-10T23:45:52.269964007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:45:52.269996 containerd[1532]: time="2025-09-10T23:45:52.269975807Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:45:52.270029 containerd[1532]: time="2025-09-10T23:45:52.270001807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:45:52.270045 containerd[1532]: 
time="2025-09-10T23:45:52.270013087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:45:52.270045 containerd[1532]: time="2025-09-10T23:45:52.270040687Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270078807Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270095807Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270106247Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270115607Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270125327Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270177767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270192927Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270270047Z" level=info msg="runtime interface created" Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270275887Z" level=info msg="created NRI interface" Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270288047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270300727Z" level=info msg="Connect containerd service" Sep 10 23:45:52.271151 containerd[1532]: time="2025-09-10T23:45:52.270329327Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:45:52.271497 containerd[1532]: time="2025-09-10T23:45:52.271465367Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:45:52.354180 containerd[1532]: time="2025-09-10T23:45:52.352086487Z" level=info msg="Start subscribing containerd event" Sep 10 23:45:52.354275 containerd[1532]: time="2025-09-10T23:45:52.354193767Z" level=info msg="Start recovering state" Sep 10 23:45:52.354314 containerd[1532]: time="2025-09-10T23:45:52.354303607Z" level=info msg="Start event monitor" Sep 10 23:45:52.354334 containerd[1532]: time="2025-09-10T23:45:52.354319807Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:45:52.354334 containerd[1532]: time="2025-09-10T23:45:52.354328127Z" level=info msg="Start streaming server" Sep 10 23:45:52.354374 containerd[1532]: time="2025-09-10T23:45:52.354337167Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:45:52.354374 containerd[1532]: 
time="2025-09-10T23:45:52.354345567Z" level=info msg="runtime interface starting up..." Sep 10 23:45:52.354374 containerd[1532]: time="2025-09-10T23:45:52.354350567Z" level=info msg="starting plugins..." Sep 10 23:45:52.354421 containerd[1532]: time="2025-09-10T23:45:52.354383247Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:45:52.354516 containerd[1532]: time="2025-09-10T23:45:52.352432007Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:45:52.354573 containerd[1532]: time="2025-09-10T23:45:52.354558127Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:45:52.354718 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 23:45:52.355240 containerd[1532]: time="2025-09-10T23:45:52.355209847Z" level=info msg="containerd successfully booted in 0.105317s" Sep 10 23:45:52.416387 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:45:52.437191 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:45:52.440470 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:45:52.461594 tar[1526]: linux-arm64/README.md Sep 10 23:45:52.463313 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:45:52.463552 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:45:52.466659 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:45:52.472998 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:45:52.479230 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:45:52.482501 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:45:52.485011 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:45:52.486818 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:45:53.608312 systemd-networkd[1438]: eth0: Gained IPv6LL Sep 10 23:45:53.611124 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:45:53.612557 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:45:53.614744 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:45:53.616581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:45:53.618494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:45:53.638479 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:45:53.639905 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:45:53.640092 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:45:53.642024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:45:54.193834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:45:54.195399 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:45:54.196430 systemd[1]: Startup finished in 2.064s (kernel) + 5.155s (initrd) + 3.959s (userspace) = 11.179s. 
Sep 10 23:45:54.197676 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:45:54.567965 kubelet[1637]: E0910 23:45:54.567842 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:45:54.570449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:45:54.570582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:45:54.570930 systemd[1]: kubelet.service: Consumed 763ms CPU time, 258.3M memory peak. Sep 10 23:45:58.454452 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:45:58.456097 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:41966.service - OpenSSH per-connection server daemon (10.0.0.1:41966). Sep 10 23:45:58.552546 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 41966 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:45:58.554929 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:58.561885 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:45:58.562937 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 23:45:58.574806 systemd-logind[1516]: New session 1 of user core. Sep 10 23:45:58.604212 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:45:58.607355 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:45:58.627197 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:45:58.629922 systemd-logind[1516]: New session c1 of user core. Sep 10 23:45:58.759728 systemd[1654]: Queued start job for default target default.target. Sep 10 23:45:58.781200 systemd[1654]: Created slice app.slice - User Application Slice. Sep 10 23:45:58.781229 systemd[1654]: Reached target paths.target - Paths. Sep 10 23:45:58.781266 systemd[1654]: Reached target timers.target - Timers. Sep 10 23:45:58.782536 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:45:58.794814 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:45:58.794923 systemd[1654]: Reached target sockets.target - Sockets. Sep 10 23:45:58.794967 systemd[1654]: Reached target basic.target - Basic System. Sep 10 23:45:58.794997 systemd[1654]: Reached target default.target - Main User Target. Sep 10 23:45:58.795022 systemd[1654]: Startup finished in 158ms. Sep 10 23:45:58.795118 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:45:58.796879 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 23:45:58.867421 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:41972.service - OpenSSH per-connection server daemon (10.0.0.1:41972). Sep 10 23:45:58.923534 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 41972 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:45:58.924903 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:58.929801 systemd-logind[1516]: New session 2 of user core. 
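The kubelet failure at the top of this stretch is the expected pre-bootstrap state: kubelet.service exits because /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-managed node that file is only written by "kubeadm init"/"kubeadm join". A minimal Go sketch that checks for the file and prints what a bare-bones KubeletConfiguration would look like; the path comes from the error above, while the YAML body is purely illustrative and not taken from this host:

package main

import (
    "errors"
    "fmt"
    "io/fs"
    "os"
)

// Path from the kubelet error above.
const kubeletConfig = "/var/lib/kubelet/config.yaml"

// Illustrative minimal KubeletConfiguration; normally generated by kubeadm.
const minimalExample = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
    _, err := os.Stat(kubeletConfig)
    switch {
    case err == nil:
        fmt.Println(kubeletConfig, "is present; kubelet should start")
    case errors.Is(err, fs.ErrNotExist):
        fmt.Println(kubeletConfig, "is missing; kubelet.service will keep restarting until something like this exists:")
        fmt.Print(minimalExample)
    default:
        fmt.Println("stat failed:", err)
    }
}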
Sep 10 23:45:58.949367 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 23:45:59.004165 sshd[1667]: Connection closed by 10.0.0.1 port 41972 Sep 10 23:45:59.005106 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:59.018399 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:41972.service: Deactivated successfully. Sep 10 23:45:59.020200 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 23:45:59.021282 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. Sep 10 23:45:59.023888 systemd-logind[1516]: Removed session 2. Sep 10 23:45:59.029695 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:41986.service - OpenSSH per-connection server daemon (10.0.0.1:41986). Sep 10 23:45:59.094394 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 41986 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:45:59.095792 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:59.100648 systemd-logind[1516]: New session 3 of user core. Sep 10 23:45:59.116349 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 23:45:59.165993 sshd[1675]: Connection closed by 10.0.0.1 port 41986 Sep 10 23:45:59.165843 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:59.177329 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:41986.service: Deactivated successfully. Sep 10 23:45:59.179444 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 23:45:59.180078 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. Sep 10 23:45:59.182609 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:42000.service - OpenSSH per-connection server daemon (10.0.0.1:42000). Sep 10 23:45:59.187049 systemd-logind[1516]: Removed session 3. Sep 10 23:45:59.232336 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 42000 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:45:59.233681 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:59.238402 systemd-logind[1516]: New session 4 of user core. Sep 10 23:45:59.256415 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 23:45:59.308759 sshd[1683]: Connection closed by 10.0.0.1 port 42000 Sep 10 23:45:59.309328 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:59.326261 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:42000.service: Deactivated successfully. Sep 10 23:45:59.327614 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 23:45:59.328384 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. Sep 10 23:45:59.333209 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Sep 10 23:45:59.334229 systemd-logind[1516]: Removed session 4. Sep 10 23:45:59.385653 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:45:59.386966 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:59.391128 systemd-logind[1516]: New session 5 of user core. Sep 10 23:45:59.406414 systemd[1]: Started session-5.scope - Session 5 of User core. 
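The sshd entries above show repeated short-lived sessions for user core authenticated by public key, the pattern of an external provisioner driving the node over SSH. A hedged Go sketch of such a client using golang.org/x/crypto/ssh follows; the private-key path and the host-key handling are assumptions for illustration (a real client should verify the host key rather than ignore it):

package main

import (
    "fmt"
    "log"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Assumed key path; the log only shows that publickey auth succeeded for "core".
    keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.ParsePrivateKey(keyBytes)
    if err != nil {
        log.Fatal(err)
    }

    cfg := &ssh.ClientConfig{
        User: "core",
        Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
        // Illustration only: production code should pin the host key instead.
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    }

    // sshd on this host accepts connections on 10.0.0.38:22 per the log above.
    client, err := ssh.Dial("tcp", "10.0.0.38:22", cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    session, err := client.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // Non-zero exit (e.g. "degraded") is reported via err; keep it non-fatal.
    out, err := session.CombinedOutput("systemctl is-system-running")
    if err != nil {
        log.Printf("remote command: %v", err)
    }
    fmt.Print(string(out))
}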
Sep 10 23:45:59.464961 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 23:45:59.465262 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:45:59.485960 sudo[1692]: pam_unix(sudo:session): session closed for user root Sep 10 23:45:59.488087 sshd[1691]: Connection closed by 10.0.0.1 port 42012 Sep 10 23:45:59.488905 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:59.504515 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:42012.service: Deactivated successfully. Sep 10 23:45:59.506088 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 23:45:59.506864 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. Sep 10 23:45:59.509377 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:42028.service - OpenSSH per-connection server daemon (10.0.0.1:42028). Sep 10 23:45:59.510614 systemd-logind[1516]: Removed session 5. Sep 10 23:45:59.573871 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 42028 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:45:59.574494 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:59.579898 systemd-logind[1516]: New session 6 of user core. Sep 10 23:45:59.590373 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 23:45:59.645229 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 23:45:59.645792 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:45:59.720443 sudo[1702]: pam_unix(sudo:session): session closed for user root Sep 10 23:45:59.725510 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 23:45:59.725786 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:45:59.735185 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:45:59.775752 augenrules[1724]: No rules Sep 10 23:45:59.776941 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:45:59.777187 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:45:59.779376 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 10 23:45:59.781305 sshd[1700]: Connection closed by 10.0.0.1 port 42028 Sep 10 23:45:59.781225 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:59.795255 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:42028.service: Deactivated successfully. Sep 10 23:45:59.797818 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 23:45:59.798597 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. Sep 10 23:45:59.801412 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:42040.service - OpenSSH per-connection server daemon (10.0.0.1:42040). Sep 10 23:45:59.802465 systemd-logind[1516]: Removed session 6. Sep 10 23:45:59.856696 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 42040 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:45:59.859663 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:59.864188 systemd-logind[1516]: New session 7 of user core. Sep 10 23:45:59.878365 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 10 23:45:59.931093 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 23:45:59.931698 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:46:00.258502 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 23:46:00.281555 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 23:46:00.499614 dockerd[1757]: time="2025-09-10T23:46:00.499529367Z" level=info msg="Starting up" Sep 10 23:46:00.500938 dockerd[1757]: time="2025-09-10T23:46:00.500910447Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 23:46:00.543147 dockerd[1757]: time="2025-09-10T23:46:00.543020807Z" level=info msg="Loading containers: start." Sep 10 23:46:00.551261 kernel: Initializing XFRM netlink socket Sep 10 23:46:00.747486 systemd-networkd[1438]: docker0: Link UP Sep 10 23:46:00.751122 dockerd[1757]: time="2025-09-10T23:46:00.751076967Z" level=info msg="Loading containers: done." Sep 10 23:46:00.764010 dockerd[1757]: time="2025-09-10T23:46:00.763956087Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 23:46:00.764131 dockerd[1757]: time="2025-09-10T23:46:00.764046127Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 10 23:46:00.764180 dockerd[1757]: time="2025-09-10T23:46:00.764164367Z" level=info msg="Initializing buildkit" Sep 10 23:46:00.792659 dockerd[1757]: time="2025-09-10T23:46:00.792609167Z" level=info msg="Completed buildkit initialization" Sep 10 23:46:00.798759 dockerd[1757]: time="2025-09-10T23:46:00.798661007Z" level=info msg="Daemon has completed initialization" Sep 10 23:46:00.799192 dockerd[1757]: time="2025-09-10T23:46:00.799120647Z" level=info msg="API listen on /run/docker.sock" Sep 10 23:46:00.799278 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 23:46:01.323052 containerd[1532]: time="2025-09-10T23:46:01.323010727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 10 23:46:01.902013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223171684.mount: Deactivated successfully. 
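dockerd is now answering on /run/docker.sock with the overlay2 storage driver. A hedged sketch using the Docker Engine Go SDK to confirm that the daemon the log reports (version 28.0.1) is reachable over that socket; the explicit WithHost option mirrors the "API listen on /run/docker.sock" line and is the only host-specific assumption:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/docker/docker/client"
)

func main() {
    // Socket path from the dockerd "API listen on /run/docker.sock" line above.
    cli, err := client.NewClientWithOpts(
        client.WithHost("unix:///run/docker.sock"),
        client.WithAPIVersionNegotiation(),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    ctx := context.Background()
    ping, err := cli.Ping(ctx)
    if err != nil {
        log.Fatalf("ping: %v", err)
    }
    ver, err := cli.ServerVersion(ctx)
    if err != nil {
        log.Fatalf("version: %v", err)
    }
    // On this host this should report the 28.0.1 daemon started above.
    fmt.Printf("docker %s, API %s\n", ver.Version, ping.APIVersion)
}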
Sep 10 23:46:02.930654 containerd[1532]: time="2025-09-10T23:46:02.930601407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:02.931569 containerd[1532]: time="2025-09-10T23:46:02.931290447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 10 23:46:02.932248 containerd[1532]: time="2025-09-10T23:46:02.932190447Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:02.934786 containerd[1532]: time="2025-09-10T23:46:02.934753287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:02.935867 containerd[1532]: time="2025-09-10T23:46:02.935828367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.61277428s" Sep 10 23:46:02.935867 containerd[1532]: time="2025-09-10T23:46:02.935866207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 10 23:46:02.937249 containerd[1532]: time="2025-09-10T23:46:02.937225127Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 10 23:46:04.108463 containerd[1532]: time="2025-09-10T23:46:04.108405807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:04.109501 containerd[1532]: time="2025-09-10T23:46:04.109476127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 10 23:46:04.110380 containerd[1532]: time="2025-09-10T23:46:04.110344847Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:04.113980 containerd[1532]: time="2025-09-10T23:46:04.113590407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:04.114729 containerd[1532]: time="2025-09-10T23:46:04.114704047Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.17744852s" Sep 10 23:46:04.114788 containerd[1532]: time="2025-09-10T23:46:04.114731247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 10 23:46:04.115212 
containerd[1532]: time="2025-09-10T23:46:04.115192047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 10 23:46:04.820965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 23:46:04.822470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:46:05.016690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:05.027652 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:46:05.061680 kubelet[2038]: E0910 23:46:05.061632 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:46:05.065575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:46:05.065711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:46:05.066658 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107.2M memory peak. Sep 10 23:46:05.314646 containerd[1532]: time="2025-09-10T23:46:05.314520567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:05.315212 containerd[1532]: time="2025-09-10T23:46:05.315177127Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 10 23:46:05.316801 containerd[1532]: time="2025-09-10T23:46:05.316762367Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:05.319394 containerd[1532]: time="2025-09-10T23:46:05.319355527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:05.321120 containerd[1532]: time="2025-09-10T23:46:05.321054047Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.20526752s" Sep 10 23:46:05.321158 containerd[1532]: time="2025-09-10T23:46:05.321122367Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 10 23:46:05.321702 containerd[1532]: time="2025-09-10T23:46:05.321667007Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 10 23:46:06.414763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924955445.mount: Deactivated successfully. 
Sep 10 23:46:06.653993 containerd[1532]: time="2025-09-10T23:46:06.653939807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:06.655128 containerd[1532]: time="2025-09-10T23:46:06.655078127Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 10 23:46:06.656696 containerd[1532]: time="2025-09-10T23:46:06.656663807Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:06.659263 containerd[1532]: time="2025-09-10T23:46:06.659198967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:06.660113 containerd[1532]: time="2025-09-10T23:46:06.660064967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.33836424s" Sep 10 23:46:06.660113 containerd[1532]: time="2025-09-10T23:46:06.660103887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 10 23:46:06.660647 containerd[1532]: time="2025-09-10T23:46:06.660614167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 10 23:46:07.226721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784286540.mount: Deactivated successfully. 
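The image events above are the CRI plugin pulling the control-plane images: each PullImage is followed by ImageCreate events and a "Pulled image" summary with size and duration. For comparison, a hedged sketch of the same operation driven directly through the containerd Go client, reusing the socket and namespace noted earlier; kube-proxy:v1.33.5, which the log shows being pulled here, is used only as an example reference:

package main

import (
    "context"
    "fmt"
    "log"

    containerd "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Same namespace the CRI plugin uses for Kubernetes images.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Pull and unpack one of the images seen in the log above.
    img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.33.5", containerd.WithPullUnpack)
    if err != nil {
        log.Fatalf("pull: %v", err)
    }
    size, err := img.Size(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("pulled %s (%d bytes, digest %s)\n", img.Name(), size, img.Target().Digest)
}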
Sep 10 23:46:08.086042 containerd[1532]: time="2025-09-10T23:46:08.085987367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:08.087000 containerd[1532]: time="2025-09-10T23:46:08.086961367Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 10 23:46:08.087876 containerd[1532]: time="2025-09-10T23:46:08.087840087Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:08.091359 containerd[1532]: time="2025-09-10T23:46:08.091302327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:08.093514 containerd[1532]: time="2025-09-10T23:46:08.093297647Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.43251872s" Sep 10 23:46:08.093514 containerd[1532]: time="2025-09-10T23:46:08.093332647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 10 23:46:08.093757 containerd[1532]: time="2025-09-10T23:46:08.093731807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 23:46:08.558609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675450890.mount: Deactivated successfully. 
Sep 10 23:46:08.567117 containerd[1532]: time="2025-09-10T23:46:08.567056327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:46:08.568033 containerd[1532]: time="2025-09-10T23:46:08.567842007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 23:46:08.568781 containerd[1532]: time="2025-09-10T23:46:08.568748807Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:46:08.570981 containerd[1532]: time="2025-09-10T23:46:08.570944007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:46:08.571815 containerd[1532]: time="2025-09-10T23:46:08.571783247Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 478.02348ms" Sep 10 23:46:08.571815 containerd[1532]: time="2025-09-10T23:46:08.571814607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 23:46:08.572500 containerd[1532]: time="2025-09-10T23:46:08.572355167Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 10 23:46:09.009727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118040690.mount: Deactivated successfully. 
Sep 10 23:46:10.499827 containerd[1532]: time="2025-09-10T23:46:10.499757927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:10.500418 containerd[1532]: time="2025-09-10T23:46:10.500386967Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 10 23:46:10.501668 containerd[1532]: time="2025-09-10T23:46:10.501631207Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:10.505510 containerd[1532]: time="2025-09-10T23:46:10.505457967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:10.507961 containerd[1532]: time="2025-09-10T23:46:10.507845207Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.93545016s" Sep 10 23:46:10.507961 containerd[1532]: time="2025-09-10T23:46:10.507906687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 10 23:46:15.316725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 23:46:15.320221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:46:15.478910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:15.482424 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:46:15.516873 kubelet[2197]: E0910 23:46:15.516815 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:46:15.519779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:46:15.520007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:46:15.520392 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.2M memory peak. Sep 10 23:46:16.463209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:16.463364 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.2M memory peak. Sep 10 23:46:16.465455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:46:16.492513 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-7.scope)... Sep 10 23:46:16.492533 systemd[1]: Reloading... Sep 10 23:46:16.573425 zram_generator::config[2254]: No configuration found. Sep 10 23:46:16.827302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:46:16.914764 systemd[1]: Reloading finished in 421 ms. 
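By this point kubelet.service has failed twice and systemd is scheduling restarts (the counter is visible in the "Scheduled restart job" lines), and a daemon reload has just completed. A hedged Go sketch that queries the unit's state over the systemd D-Bus API with github.com/coreos/go-systemd; only the unit name is taken from the log:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
    ctx := context.Background()
    conn, err := dbus.NewWithContext(ctx)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // The unit systemd keeps restarting in the log above.
    props, err := conn.GetUnitPropertiesContext(ctx, "kubelet.service")
    if err != nil {
        log.Fatal(err)
    }
    // While the config file is still missing this will read
    // "activating"/"auto-restart" or "failed" between attempts.
    fmt.Println("ActiveState:", props["ActiveState"])
    fmt.Println("SubState:  ", props["SubState"])
}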
Sep 10 23:46:16.964000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:16.966032 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:46:16.968729 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:46:16.968998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:16.969045 systemd[1]: kubelet.service: Consumed 101ms CPU time, 95M memory peak. Sep 10 23:46:16.971760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:46:17.103002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:17.106997 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:46:17.141099 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:46:17.141099 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:46:17.141099 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:46:17.141479 kubelet[2303]: I0910 23:46:17.141187 2303 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:46:18.424204 kubelet[2303]: I0910 23:46:18.424151 2303 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:46:18.424204 kubelet[2303]: I0910 23:46:18.424183 2303 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:46:18.424549 kubelet[2303]: I0910 23:46:18.424445 2303 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:46:18.446037 kubelet[2303]: E0910 23:46:18.445701 2303 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 23:46:18.448717 kubelet[2303]: I0910 23:46:18.448681 2303 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:46:18.455528 kubelet[2303]: I0910 23:46:18.455505 2303 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:46:18.458362 kubelet[2303]: I0910 23:46:18.458338 2303 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:46:18.459460 kubelet[2303]: I0910 23:46:18.459390 2303 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:46:18.459616 kubelet[2303]: I0910 23:46:18.459444 2303 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:46:18.459702 kubelet[2303]: I0910 23:46:18.459685 2303 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:46:18.459702 kubelet[2303]: I0910 23:46:18.459694 2303 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:46:18.459905 kubelet[2303]: I0910 23:46:18.459888 2303 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:46:18.462521 kubelet[2303]: I0910 23:46:18.462392 2303 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:46:18.462521 kubelet[2303]: I0910 23:46:18.462423 2303 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:46:18.462521 kubelet[2303]: I0910 23:46:18.462453 2303 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:46:18.462521 kubelet[2303]: I0910 23:46:18.462469 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:46:18.463624 kubelet[2303]: I0910 23:46:18.463602 2303 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:46:18.464398 kubelet[2303]: I0910 23:46:18.464356 2303 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 23:46:18.464517 kubelet[2303]: W0910 23:46:18.464502 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 10 23:46:18.467122 kubelet[2303]: I0910 23:46:18.467086 2303 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:46:18.467122 kubelet[2303]: I0910 23:46:18.467125 2303 server.go:1289] "Started kubelet" Sep 10 23:46:18.469748 kubelet[2303]: E0910 23:46:18.468616 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 23:46:18.469748 kubelet[2303]: I0910 23:46:18.468737 2303 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:46:18.472026 kubelet[2303]: E0910 23:46:18.471994 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 23:46:18.473307 kubelet[2303]: E0910 23:46:18.470396 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18641093143ccc6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:46:18.467101807 +0000 UTC m=+1.356355721,LastTimestamp:2025-09-10 23:46:18.467101807 +0000 UTC m=+1.356355721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:46:18.474008 kubelet[2303]: I0910 23:46:18.473884 2303 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:46:18.474268 kubelet[2303]: I0910 23:46:18.474239 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:46:18.474591 kubelet[2303]: I0910 23:46:18.474005 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:46:18.475713 kubelet[2303]: I0910 23:46:18.473942 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:46:18.475980 kubelet[2303]: I0910 23:46:18.475961 2303 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:46:18.476446 kubelet[2303]: E0910 23:46:18.476426 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:46:18.476573 kubelet[2303]: I0910 23:46:18.476561 2303 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:46:18.478488 kubelet[2303]: I0910 23:46:18.477458 2303 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:46:18.478568 kubelet[2303]: I0910 23:46:18.477696 2303 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:46:18.478630 kubelet[2303]: E0910 23:46:18.478175 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 23:46:18.478683 kubelet[2303]: E0910 23:46:18.478405 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Sep 10 23:46:18.478938 kubelet[2303]: I0910 23:46:18.478906 2303 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:46:18.479023 kubelet[2303]: I0910 23:46:18.479001 2303 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:46:18.480217 kubelet[2303]: I0910 23:46:18.480083 2303 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:46:18.490585 kubelet[2303]: I0910 23:46:18.490550 2303 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:46:18.490585 kubelet[2303]: I0910 23:46:18.490572 2303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:46:18.490585 kubelet[2303]: I0910 23:46:18.490589 2303 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:46:18.492563 kubelet[2303]: I0910 23:46:18.492430 2303 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:46:18.493623 kubelet[2303]: I0910 23:46:18.493601 2303 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:46:18.493693 kubelet[2303]: I0910 23:46:18.493684 2303 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:46:18.493762 kubelet[2303]: I0910 23:46:18.493751 2303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:46:18.493805 kubelet[2303]: I0910 23:46:18.493796 2303 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:46:18.493899 kubelet[2303]: E0910 23:46:18.493883 2303 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:46:18.570486 kubelet[2303]: I0910 23:46:18.570447 2303 policy_none.go:49] "None policy: Start" Sep 10 23:46:18.570486 kubelet[2303]: I0910 23:46:18.570482 2303 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:46:18.570486 kubelet[2303]: I0910 23:46:18.570496 2303 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:46:18.571201 kubelet[2303]: E0910 23:46:18.571171 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 23:46:18.576835 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 10 23:46:18.577071 kubelet[2303]: E0910 23:46:18.577000 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:46:18.588915 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:46:18.592298 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 23:46:18.595071 kubelet[2303]: E0910 23:46:18.595043 2303 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:46:18.599039 kubelet[2303]: E0910 23:46:18.599012 2303 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:46:18.599411 kubelet[2303]: I0910 23:46:18.599259 2303 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:46:18.599411 kubelet[2303]: I0910 23:46:18.599276 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:46:18.599510 kubelet[2303]: I0910 23:46:18.599486 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:46:18.601152 kubelet[2303]: E0910 23:46:18.601120 2303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 23:46:18.601636 kubelet[2303]: E0910 23:46:18.601589 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:46:18.679642 kubelet[2303]: E0910 23:46:18.679514 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Sep 10 23:46:18.701233 kubelet[2303]: I0910 23:46:18.701196 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:46:18.701784 kubelet[2303]: E0910 23:46:18.701753 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 10 23:46:18.821452 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 10 23:46:18.831984 kubelet[2303]: E0910 23:46:18.831935 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:18.834380 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 10 23:46:18.855586 kubelet[2303]: E0910 23:46:18.855551 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:18.858048 systemd[1]: Created slice kubepods-burstable-pod42f95a78601fb0505bbf153754e6d391.slice - libcontainer container kubepods-burstable-pod42f95a78601fb0505bbf153754e6d391.slice. 
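The kubepods-burstable-pod*.slice units created above correspond to the control-plane static pods (apiserver, controller-manager, scheduler) that the kubelet reads from its static pod path, reported earlier as /etc/kubernetes/manifests. A small sketch that lists whatever manifests are present in that directory; the path comes from the log, the rest is illustrative:

package main

import (
    "fmt"
    "log"
    "os"
    "path/filepath"
)

func main() {
    // Static pod path as reported by the kubelet ("Adding static pod path").
    dir := "/etc/kubernetes/manifests"
    entries, err := os.ReadDir(dir)
    if err != nil {
        log.Fatal(err)
    }
    for _, e := range entries {
        if e.IsDir() {
            continue
        }
        info, err := e.Info()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s\t%d bytes\n", filepath.Join(dir, e.Name()), info.Size())
    }
}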
Sep 10 23:46:18.860001 kubelet[2303]: E0910 23:46:18.859978 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:18.881406 kubelet[2303]: I0910 23:46:18.881377 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:18.881474 kubelet[2303]: I0910 23:46:18.881414 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:18.881474 kubelet[2303]: I0910 23:46:18.881434 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:18.881474 kubelet[2303]: I0910 23:46:18.881453 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:18.881474 kubelet[2303]: I0910 23:46:18.881469 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:18.881569 kubelet[2303]: I0910 23:46:18.881485 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:46:18.881569 kubelet[2303]: I0910 23:46:18.881499 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42f95a78601fb0505bbf153754e6d391-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42f95a78601fb0505bbf153754e6d391\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:18.881569 kubelet[2303]: I0910 23:46:18.881512 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42f95a78601fb0505bbf153754e6d391-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42f95a78601fb0505bbf153754e6d391\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:18.881569 kubelet[2303]: I0910 23:46:18.881527 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42f95a78601fb0505bbf153754e6d391-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42f95a78601fb0505bbf153754e6d391\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:18.903776 kubelet[2303]: I0910 23:46:18.903752 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:46:18.904100 kubelet[2303]: E0910 23:46:18.904076 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 10 23:46:19.081003 kubelet[2303]: E0910 23:46:19.080880 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Sep 10 23:46:19.133033 containerd[1532]: time="2025-09-10T23:46:19.132993207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:19.156668 containerd[1532]: time="2025-09-10T23:46:19.156625367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:19.161325 containerd[1532]: time="2025-09-10T23:46:19.161295127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42f95a78601fb0505bbf153754e6d391,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:19.190935 containerd[1532]: time="2025-09-10T23:46:19.190876007Z" level=info msg="connecting to shim cace9b32e8f6f0b3c07836b09cbbdb1005e2c3f873e73aacde9289a42e7ca4ff" address="unix:///run/containerd/s/3c212f1b8659f2892594b834427089b0172b88a6c17d710762f9c83065b0a030" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:19.192065 containerd[1532]: time="2025-09-10T23:46:19.192031367Z" level=info msg="connecting to shim d2c5df7aecb49370cc9934ffc15c7a4e7b494c2953dd4d7f9f5ff4e2362c3405" address="unix:///run/containerd/s/41c926855c33746cb0f51c39fa31e6e3a8f8329d245f46683083642bdf7f704a" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:19.201363 containerd[1532]: time="2025-09-10T23:46:19.201326927Z" level=info msg="connecting to shim 1b8f98527e9982176d3f39e28def3cfd07f08ac2ce72944cee74e43bff06ebc6" address="unix:///run/containerd/s/f5299684eb3770ac4a1896d250141e220a71c2d47fb98f78037a2c27840ceb89" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:19.220304 systemd[1]: Started cri-containerd-d2c5df7aecb49370cc9934ffc15c7a4e7b494c2953dd4d7f9f5ff4e2362c3405.scope - libcontainer container d2c5df7aecb49370cc9934ffc15c7a4e7b494c2953dd4d7f9f5ff4e2362c3405. Sep 10 23:46:19.224946 systemd[1]: Started cri-containerd-1b8f98527e9982176d3f39e28def3cfd07f08ac2ce72944cee74e43bff06ebc6.scope - libcontainer container 1b8f98527e9982176d3f39e28def3cfd07f08ac2ce72944cee74e43bff06ebc6. Sep 10 23:46:19.226706 systemd[1]: Started cri-containerd-cace9b32e8f6f0b3c07836b09cbbdb1005e2c3f873e73aacde9289a42e7ca4ff.scope - libcontainer container cace9b32e8f6f0b3c07836b09cbbdb1005e2c3f873e73aacde9289a42e7ca4ff. 
Sep 10 23:46:19.267374 containerd[1532]: time="2025-09-10T23:46:19.267336367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2c5df7aecb49370cc9934ffc15c7a4e7b494c2953dd4d7f9f5ff4e2362c3405\"" Sep 10 23:46:19.271151 containerd[1532]: time="2025-09-10T23:46:19.270780967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42f95a78601fb0505bbf153754e6d391,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b8f98527e9982176d3f39e28def3cfd07f08ac2ce72944cee74e43bff06ebc6\"" Sep 10 23:46:19.274276 containerd[1532]: time="2025-09-10T23:46:19.274219287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cace9b32e8f6f0b3c07836b09cbbdb1005e2c3f873e73aacde9289a42e7ca4ff\"" Sep 10 23:46:19.275701 containerd[1532]: time="2025-09-10T23:46:19.275363567Z" level=info msg="CreateContainer within sandbox \"d2c5df7aecb49370cc9934ffc15c7a4e7b494c2953dd4d7f9f5ff4e2362c3405\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:46:19.276527 containerd[1532]: time="2025-09-10T23:46:19.276497007Z" level=info msg="CreateContainer within sandbox \"1b8f98527e9982176d3f39e28def3cfd07f08ac2ce72944cee74e43bff06ebc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:46:19.283300 containerd[1532]: time="2025-09-10T23:46:19.283272567Z" level=info msg="CreateContainer within sandbox \"cace9b32e8f6f0b3c07836b09cbbdb1005e2c3f873e73aacde9289a42e7ca4ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:46:19.283768 containerd[1532]: time="2025-09-10T23:46:19.283740647Z" level=info msg="Container 0db81a10371b288ec00ccd3a33271caa0398efe307fe5ad2a673374d5256daf3: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:19.287594 containerd[1532]: time="2025-09-10T23:46:19.287565567Z" level=info msg="Container a4e8f6f98525f15b526789886328a014c8f48fb2a7429ad547610821dfee989b: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:19.291452 containerd[1532]: time="2025-09-10T23:46:19.291421447Z" level=info msg="CreateContainer within sandbox \"1b8f98527e9982176d3f39e28def3cfd07f08ac2ce72944cee74e43bff06ebc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0db81a10371b288ec00ccd3a33271caa0398efe307fe5ad2a673374d5256daf3\"" Sep 10 23:46:19.292033 containerd[1532]: time="2025-09-10T23:46:19.292011847Z" level=info msg="StartContainer for \"0db81a10371b288ec00ccd3a33271caa0398efe307fe5ad2a673374d5256daf3\"" Sep 10 23:46:19.292410 containerd[1532]: time="2025-09-10T23:46:19.292384207Z" level=info msg="Container 4aa8fb6cb351d384ec7e5bf9adf5132d5452378da377b02987dbfeb3570e398c: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:19.293337 containerd[1532]: time="2025-09-10T23:46:19.293309887Z" level=info msg="connecting to shim 0db81a10371b288ec00ccd3a33271caa0398efe307fe5ad2a673374d5256daf3" address="unix:///run/containerd/s/f5299684eb3770ac4a1896d250141e220a71c2d47fb98f78037a2c27840ceb89" protocol=ttrpc version=3 Sep 10 23:46:19.297031 containerd[1532]: time="2025-09-10T23:46:19.296937327Z" level=info msg="CreateContainer within sandbox \"d2c5df7aecb49370cc9934ffc15c7a4e7b494c2953dd4d7f9f5ff4e2362c3405\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"a4e8f6f98525f15b526789886328a014c8f48fb2a7429ad547610821dfee989b\"" Sep 10 23:46:19.297477 containerd[1532]: time="2025-09-10T23:46:19.297449287Z" level=info msg="StartContainer for \"a4e8f6f98525f15b526789886328a014c8f48fb2a7429ad547610821dfee989b\"" Sep 10 23:46:19.298425 containerd[1532]: time="2025-09-10T23:46:19.298400047Z" level=info msg="connecting to shim a4e8f6f98525f15b526789886328a014c8f48fb2a7429ad547610821dfee989b" address="unix:///run/containerd/s/41c926855c33746cb0f51c39fa31e6e3a8f8329d245f46683083642bdf7f704a" protocol=ttrpc version=3 Sep 10 23:46:19.303544 containerd[1532]: time="2025-09-10T23:46:19.303510527Z" level=info msg="CreateContainer within sandbox \"cace9b32e8f6f0b3c07836b09cbbdb1005e2c3f873e73aacde9289a42e7ca4ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4aa8fb6cb351d384ec7e5bf9adf5132d5452378da377b02987dbfeb3570e398c\"" Sep 10 23:46:19.304024 containerd[1532]: time="2025-09-10T23:46:19.303941367Z" level=info msg="StartContainer for \"4aa8fb6cb351d384ec7e5bf9adf5132d5452378da377b02987dbfeb3570e398c\"" Sep 10 23:46:19.304942 containerd[1532]: time="2025-09-10T23:46:19.304915927Z" level=info msg="connecting to shim 4aa8fb6cb351d384ec7e5bf9adf5132d5452378da377b02987dbfeb3570e398c" address="unix:///run/containerd/s/3c212f1b8659f2892594b834427089b0172b88a6c17d710762f9c83065b0a030" protocol=ttrpc version=3 Sep 10 23:46:19.306409 kubelet[2303]: I0910 23:46:19.306385 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:46:19.307411 kubelet[2303]: E0910 23:46:19.307160 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 10 23:46:19.315333 systemd[1]: Started cri-containerd-0db81a10371b288ec00ccd3a33271caa0398efe307fe5ad2a673374d5256daf3.scope - libcontainer container 0db81a10371b288ec00ccd3a33271caa0398efe307fe5ad2a673374d5256daf3. Sep 10 23:46:19.318145 systemd[1]: Started cri-containerd-a4e8f6f98525f15b526789886328a014c8f48fb2a7429ad547610821dfee989b.scope - libcontainer container a4e8f6f98525f15b526789886328a014c8f48fb2a7429ad547610821dfee989b. Sep 10 23:46:19.324237 systemd[1]: Started cri-containerd-4aa8fb6cb351d384ec7e5bf9adf5132d5452378da377b02987dbfeb3570e398c.scope - libcontainer container 4aa8fb6cb351d384ec7e5bf9adf5132d5452378da377b02987dbfeb3570e398c. 
Sep 10 23:46:19.363320 containerd[1532]: time="2025-09-10T23:46:19.363200527Z" level=info msg="StartContainer for \"0db81a10371b288ec00ccd3a33271caa0398efe307fe5ad2a673374d5256daf3\" returns successfully" Sep 10 23:46:19.374072 containerd[1532]: time="2025-09-10T23:46:19.373718247Z" level=info msg="StartContainer for \"a4e8f6f98525f15b526789886328a014c8f48fb2a7429ad547610821dfee989b\" returns successfully" Sep 10 23:46:19.375299 containerd[1532]: time="2025-09-10T23:46:19.375277167Z" level=info msg="StartContainer for \"4aa8fb6cb351d384ec7e5bf9adf5132d5452378da377b02987dbfeb3570e398c\" returns successfully" Sep 10 23:46:19.507156 kubelet[2303]: E0910 23:46:19.506714 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:19.509492 kubelet[2303]: E0910 23:46:19.509460 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:19.512178 kubelet[2303]: E0910 23:46:19.511342 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:20.109897 kubelet[2303]: I0910 23:46:20.109841 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:46:20.513032 kubelet[2303]: E0910 23:46:20.512893 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:20.514331 kubelet[2303]: E0910 23:46:20.514311 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:46:21.011888 kubelet[2303]: E0910 23:46:21.011827 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 23:46:21.117684 kubelet[2303]: I0910 23:46:21.117635 2303 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:46:21.178856 kubelet[2303]: I0910 23:46:21.178817 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:21.188179 kubelet[2303]: E0910 23:46:21.188114 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:21.188315 kubelet[2303]: I0910 23:46:21.188195 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:46:21.189925 kubelet[2303]: E0910 23:46:21.189883 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 23:46:21.189925 kubelet[2303]: I0910 23:46:21.189906 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:21.191635 kubelet[2303]: E0910 23:46:21.191597 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:21.470797 kubelet[2303]: I0910 23:46:21.470750 
2303 apiserver.go:52] "Watching apiserver" Sep 10 23:46:21.479581 kubelet[2303]: I0910 23:46:21.479543 2303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:46:22.007943 kubelet[2303]: I0910 23:46:22.007753 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:23.060211 systemd[1]: Reload requested from client PID 2587 ('systemctl') (unit session-7.scope)... Sep 10 23:46:23.060230 systemd[1]: Reloading... Sep 10 23:46:23.157425 zram_generator::config[2633]: No configuration found. Sep 10 23:46:23.230065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:46:23.332032 systemd[1]: Reloading finished in 271 ms. Sep 10 23:46:23.367528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:46:23.381319 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:46:23.381633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:23.381729 systemd[1]: kubelet.service: Consumed 1.751s CPU time, 129.8M memory peak. Sep 10 23:46:23.383782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:46:23.544256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:46:23.549574 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:46:23.591257 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:46:23.591257 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:46:23.591257 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 23:46:23.591257 kubelet[2672]: I0910 23:46:23.590999 2672 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:46:23.598165 kubelet[2672]: I0910 23:46:23.597454 2672 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:46:23.598165 kubelet[2672]: I0910 23:46:23.597487 2672 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:46:23.598165 kubelet[2672]: I0910 23:46:23.597721 2672 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:46:23.599852 kubelet[2672]: I0910 23:46:23.599818 2672 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 23:46:23.602464 kubelet[2672]: I0910 23:46:23.602419 2672 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:46:23.606422 kubelet[2672]: I0910 23:46:23.606369 2672 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:46:23.609484 kubelet[2672]: I0910 23:46:23.609453 2672 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 23:46:23.609724 kubelet[2672]: I0910 23:46:23.609682 2672 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:46:23.609867 kubelet[2672]: I0910 23:46:23.609708 2672 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:46:23.609946 kubelet[2672]: I0910 23:46:23.609875 2672 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:46:23.609946 kubelet[2672]: I0910 23:46:23.609884 2672 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:46:23.609988 kubelet[2672]: I0910 23:46:23.609949 2672 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:46:23.610104 kubelet[2672]: I0910 
23:46:23.610092 2672 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:46:23.610166 kubelet[2672]: I0910 23:46:23.610107 2672 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:46:23.610166 kubelet[2672]: I0910 23:46:23.610133 2672 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:46:23.610166 kubelet[2672]: I0910 23:46:23.610165 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:46:23.611077 kubelet[2672]: I0910 23:46:23.611047 2672 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:46:23.612765 kubelet[2672]: I0910 23:46:23.612729 2672 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 23:46:23.616215 kubelet[2672]: I0910 23:46:23.616174 2672 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:46:23.616308 kubelet[2672]: I0910 23:46:23.616234 2672 server.go:1289] "Started kubelet" Sep 10 23:46:23.618527 kubelet[2672]: I0910 23:46:23.617271 2672 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:46:23.618990 kubelet[2672]: I0910 23:46:23.618909 2672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:46:23.619276 kubelet[2672]: I0910 23:46:23.619257 2672 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:46:23.621652 kubelet[2672]: I0910 23:46:23.621605 2672 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:46:23.622248 kubelet[2672]: I0910 23:46:23.622229 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:46:23.623833 kubelet[2672]: I0910 23:46:23.623809 2672 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:46:23.624938 kubelet[2672]: E0910 23:46:23.624908 2672 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:46:23.625577 kubelet[2672]: E0910 23:46:23.625554 2672 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:46:23.625629 kubelet[2672]: I0910 23:46:23.625589 2672 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:46:23.625798 kubelet[2672]: I0910 23:46:23.625780 2672 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:46:23.625924 kubelet[2672]: I0910 23:46:23.625912 2672 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:46:23.631903 kubelet[2672]: I0910 23:46:23.631862 2672 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:46:23.633012 kubelet[2672]: I0910 23:46:23.632929 2672 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:46:23.633012 kubelet[2672]: I0910 23:46:23.632944 2672 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:46:23.656405 kubelet[2672]: I0910 23:46:23.656347 2672 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 10 23:46:23.658910 kubelet[2672]: I0910 23:46:23.658876 2672 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:46:23.658910 kubelet[2672]: I0910 23:46:23.658908 2672 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:46:23.659509 kubelet[2672]: I0910 23:46:23.658949 2672 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:46:23.659509 kubelet[2672]: I0910 23:46:23.658969 2672 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:46:23.659509 kubelet[2672]: E0910 23:46:23.659027 2672 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:46:23.681430 kubelet[2672]: I0910 23:46:23.681398 2672 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681574 2672 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681600 2672 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681727 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681737 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681753 2672 policy_none.go:49] "None policy: Start" Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681762 2672 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681770 2672 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:46:23.682215 kubelet[2672]: I0910 23:46:23.681855 2672 state_mem.go:75] "Updated machine memory state" Sep 10 23:46:23.686396 kubelet[2672]: E0910 23:46:23.686362 2672 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:46:23.686937 kubelet[2672]: I0910 23:46:23.686551 2672 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:46:23.686937 kubelet[2672]: I0910 23:46:23.686605 2672 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:46:23.686937 kubelet[2672]: I0910 23:46:23.686829 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:46:23.687946 kubelet[2672]: E0910 23:46:23.687911 2672 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:46:23.760268 kubelet[2672]: I0910 23:46:23.760228 2672 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:46:23.760395 kubelet[2672]: I0910 23:46:23.760242 2672 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:23.760395 kubelet[2672]: I0910 23:46:23.760359 2672 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:23.767706 kubelet[2672]: E0910 23:46:23.767673 2672 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:23.788026 kubelet[2672]: I0910 23:46:23.787992 2672 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:46:23.796194 kubelet[2672]: I0910 23:46:23.795794 2672 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 10 23:46:23.796194 kubelet[2672]: I0910 23:46:23.795876 2672 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:46:23.827074 kubelet[2672]: I0910 23:46:23.827020 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:23.827074 kubelet[2672]: I0910 23:46:23.827063 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:46:23.827074 kubelet[2672]: I0910 23:46:23.827080 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42f95a78601fb0505bbf153754e6d391-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42f95a78601fb0505bbf153754e6d391\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:23.827269 kubelet[2672]: I0910 23:46:23.827097 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42f95a78601fb0505bbf153754e6d391-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42f95a78601fb0505bbf153754e6d391\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:23.827269 kubelet[2672]: I0910 23:46:23.827129 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42f95a78601fb0505bbf153754e6d391-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42f95a78601fb0505bbf153754e6d391\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:46:23.827269 kubelet[2672]: I0910 23:46:23.827171 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:23.827269 kubelet[2672]: I0910 23:46:23.827194 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:23.827269 kubelet[2672]: I0910 23:46:23.827216 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:23.827375 kubelet[2672]: I0910 23:46:23.827275 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:46:24.052212 sudo[2715]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 23:46:24.052513 sudo[2715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 10 23:46:24.516335 sudo[2715]: pam_unix(sudo:session): session closed for user root Sep 10 23:46:24.613055 kubelet[2672]: I0910 23:46:24.612687 2672 apiserver.go:52] "Watching apiserver" Sep 10 23:46:24.627713 kubelet[2672]: I0910 23:46:24.626259 2672 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:46:24.735456 kubelet[2672]: I0910 23:46:24.734737 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.734719567 podStartE2EDuration="2.734719567s" podCreationTimestamp="2025-09-10 23:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:46:24.731271727 +0000 UTC m=+1.174295761" watchObservedRunningTime="2025-09-10 23:46:24.734719567 +0000 UTC m=+1.177743601" Sep 10 23:46:24.735456 kubelet[2672]: I0910 23:46:24.735014 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7350031270000001 podStartE2EDuration="1.735003127s" podCreationTimestamp="2025-09-10 23:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:46:24.717620567 +0000 UTC m=+1.160644641" watchObservedRunningTime="2025-09-10 23:46:24.735003127 +0000 UTC m=+1.178027161" Sep 10 23:46:24.745852 kubelet[2672]: I0910 23:46:24.745786 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.745770007 podStartE2EDuration="1.745770007s" podCreationTimestamp="2025-09-10 23:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:46:24.745366087 +0000 UTC m=+1.188390121" watchObservedRunningTime="2025-09-10 23:46:24.745770007 +0000 UTC m=+1.188794041" Sep 10 
23:46:26.589831 sudo[1736]: pam_unix(sudo:session): session closed for user root Sep 10 23:46:26.591119 sshd[1735]: Connection closed by 10.0.0.1 port 42040 Sep 10 23:46:26.591711 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 10 23:46:26.595310 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:42040.service: Deactivated successfully. Sep 10 23:46:26.597450 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 23:46:26.597717 systemd[1]: session-7.scope: Consumed 8.555s CPU time, 262.4M memory peak. Sep 10 23:46:26.598844 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. Sep 10 23:46:26.600264 systemd-logind[1516]: Removed session 7. Sep 10 23:46:28.631185 kubelet[2672]: I0910 23:46:28.631103 2672 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:46:28.631997 containerd[1532]: time="2025-09-10T23:46:28.631886417Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 23:46:28.632321 kubelet[2672]: I0910 23:46:28.632095 2672 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:46:29.307061 systemd[1]: Created slice kubepods-besteffort-pod2d195009_2e73_4c2e_8101_5d65b830e439.slice - libcontainer container kubepods-besteffort-pod2d195009_2e73_4c2e_8101_5d65b830e439.slice. Sep 10 23:46:29.321682 systemd[1]: Created slice kubepods-burstable-pod7ec97403_52b2_4394_b361_4cd9617f584d.slice - libcontainer container kubepods-burstable-pod7ec97403_52b2_4394_b361_4cd9617f584d.slice. Sep 10 23:46:29.367441 kubelet[2672]: I0910 23:46:29.367404 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-lib-modules\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367441 kubelet[2672]: I0910 23:46:29.367443 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-bpf-maps\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367585 kubelet[2672]: I0910 23:46:29.367464 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-cgroup\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367585 kubelet[2672]: I0910 23:46:29.367482 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ec97403-52b2-4394-b361-4cd9617f584d-clustermesh-secrets\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367585 kubelet[2672]: I0910 23:46:29.367522 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-config-path\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367655 kubelet[2672]: I0910 23:46:29.367581 2672 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-kernel\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367655 kubelet[2672]: I0910 23:46:29.367619 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-hubble-tls\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367655 kubelet[2672]: I0910 23:46:29.367637 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-run\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367716 kubelet[2672]: I0910 23:46:29.367666 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67gtj\" (UniqueName: \"kubernetes.io/projected/2d195009-2e73-4c2e-8101-5d65b830e439-kube-api-access-67gtj\") pod \"kube-proxy-tzpvg\" (UID: \"2d195009-2e73-4c2e-8101-5d65b830e439\") " pod="kube-system/kube-proxy-tzpvg" Sep 10 23:46:29.367716 kubelet[2672]: I0910 23:46:29.367689 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-hostproc\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367765 kubelet[2672]: I0910 23:46:29.367730 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-etc-cni-netd\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367765 kubelet[2672]: I0910 23:46:29.367753 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-xtables-lock\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.367808 kubelet[2672]: I0910 23:46:29.367769 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-net\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.368027 kubelet[2672]: I0910 23:46:29.367812 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88cpp\" (UniqueName: \"kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-kube-api-access-88cpp\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.368027 kubelet[2672]: I0910 23:46:29.367871 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2d195009-2e73-4c2e-8101-5d65b830e439-lib-modules\") pod \"kube-proxy-tzpvg\" (UID: \"2d195009-2e73-4c2e-8101-5d65b830e439\") " pod="kube-system/kube-proxy-tzpvg" Sep 10 23:46:29.368027 kubelet[2672]: I0910 23:46:29.367890 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cni-path\") pod \"cilium-4dn9w\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " pod="kube-system/cilium-4dn9w" Sep 10 23:46:29.368027 kubelet[2672]: I0910 23:46:29.367919 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d195009-2e73-4c2e-8101-5d65b830e439-kube-proxy\") pod \"kube-proxy-tzpvg\" (UID: \"2d195009-2e73-4c2e-8101-5d65b830e439\") " pod="kube-system/kube-proxy-tzpvg" Sep 10 23:46:29.368027 kubelet[2672]: I0910 23:46:29.367944 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d195009-2e73-4c2e-8101-5d65b830e439-xtables-lock\") pod \"kube-proxy-tzpvg\" (UID: \"2d195009-2e73-4c2e-8101-5d65b830e439\") " pod="kube-system/kube-proxy-tzpvg" Sep 10 23:46:29.485988 kubelet[2672]: E0910 23:46:29.485543 2672 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 10 23:46:29.485988 kubelet[2672]: E0910 23:46:29.485591 2672 projected.go:194] Error preparing data for projected volume kube-api-access-88cpp for pod kube-system/cilium-4dn9w: configmap "kube-root-ca.crt" not found Sep 10 23:46:29.485988 kubelet[2672]: E0910 23:46:29.485670 2672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-kube-api-access-88cpp podName:7ec97403-52b2-4394-b361-4cd9617f584d nodeName:}" failed. No retries permitted until 2025-09-10 23:46:29.985648609 +0000 UTC m=+6.428672643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-88cpp" (UniqueName: "kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-kube-api-access-88cpp") pod "cilium-4dn9w" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d") : configmap "kube-root-ca.crt" not found Sep 10 23:46:29.488190 kubelet[2672]: E0910 23:46:29.488167 2672 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 10 23:46:29.488267 kubelet[2672]: E0910 23:46:29.488217 2672 projected.go:194] Error preparing data for projected volume kube-api-access-67gtj for pod kube-system/kube-proxy-tzpvg: configmap "kube-root-ca.crt" not found Sep 10 23:46:29.488371 kubelet[2672]: E0910 23:46:29.488353 2672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d195009-2e73-4c2e-8101-5d65b830e439-kube-api-access-67gtj podName:2d195009-2e73-4c2e-8101-5d65b830e439 nodeName:}" failed. No retries permitted until 2025-09-10 23:46:29.988335352 +0000 UTC m=+6.431359426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-67gtj" (UniqueName: "kubernetes.io/projected/2d195009-2e73-4c2e-8101-5d65b830e439-kube-api-access-67gtj") pod "kube-proxy-tzpvg" (UID: "2d195009-2e73-4c2e-8101-5d65b830e439") : configmap "kube-root-ca.crt" not found Sep 10 23:46:29.896522 systemd[1]: Created slice kubepods-besteffort-pod1cf795db_4294_4e84_98fc_db007d34dc3a.slice - libcontainer container kubepods-besteffort-pod1cf795db_4294_4e84_98fc_db007d34dc3a.slice. Sep 10 23:46:29.972360 kubelet[2672]: I0910 23:46:29.972294 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cf795db-4294-4e84-98fc-db007d34dc3a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r54v4\" (UID: \"1cf795db-4294-4e84-98fc-db007d34dc3a\") " pod="kube-system/cilium-operator-6c4d7847fc-r54v4" Sep 10 23:46:29.972746 kubelet[2672]: I0910 23:46:29.972395 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtrl\" (UniqueName: \"kubernetes.io/projected/1cf795db-4294-4e84-98fc-db007d34dc3a-kube-api-access-rqtrl\") pod \"cilium-operator-6c4d7847fc-r54v4\" (UID: \"1cf795db-4294-4e84-98fc-db007d34dc3a\") " pod="kube-system/cilium-operator-6c4d7847fc-r54v4" Sep 10 23:46:30.205560 containerd[1532]: time="2025-09-10T23:46:30.205428531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r54v4,Uid:1cf795db-4294-4e84-98fc-db007d34dc3a,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:30.219379 containerd[1532]: time="2025-09-10T23:46:30.219320163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzpvg,Uid:2d195009-2e73-4c2e-8101-5d65b830e439,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:30.225926 containerd[1532]: time="2025-09-10T23:46:30.225655253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dn9w,Uid:7ec97403-52b2-4394-b361-4cd9617f584d,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:30.248938 containerd[1532]: time="2025-09-10T23:46:30.248882479Z" level=info msg="connecting to shim bcde258ccc8eccb3858b55178d6268186da7506192676e84e7f80d461bfcd301" address="unix:///run/containerd/s/9ee48c20152a287bc379dd2564b4c775939de5c32291758418b9523e4688fc04" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:30.260270 containerd[1532]: time="2025-09-10T23:46:30.260219970Z" level=info msg="connecting to shim 18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887" address="unix:///run/containerd/s/9d667bf9c1e10f2eaa7f3a7748cda0d36d8d0452ad2fb2a4f6dcf52b41c8bdce" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:30.264767 containerd[1532]: time="2025-09-10T23:46:30.264722286Z" level=info msg="connecting to shim 9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1" address="unix:///run/containerd/s/a5b1468f9a692228fa98fcb353d8bf40f9acd2efcbad9d21dc87920388542feb" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:30.284632 systemd[1]: Started cri-containerd-bcde258ccc8eccb3858b55178d6268186da7506192676e84e7f80d461bfcd301.scope - libcontainer container bcde258ccc8eccb3858b55178d6268186da7506192676e84e7f80d461bfcd301. Sep 10 23:46:30.295809 systemd[1]: Started cri-containerd-18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887.scope - libcontainer container 18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887. 
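
[Annotation] The projected-volume failures above ("configmap \"kube-root-ca.crt\" not found") are deferred rather than fatal: the kubelet records "No retries permitted until ... (durationBeforeRetry 500ms)" and tries again once the ConfigMap exists. The sketch below is a minimal version of that retry-with-backoff pattern; the 500ms initial delay comes from the log, while the doubling and the cap are assumptions and not the kubelet's exact policy.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after each failure up to a
// cap. Assumed policy for illustration only.
func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed, no retries permitted for %s\n", i+1, delay)
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return errors.New("all attempts failed")
}

func main() {
	calls := 0
	// Simulate the ConfigMap appearing on the third attempt.
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil
	}, 500*time.Millisecond, 8*time.Second, 5)
	fmt.Println("result:", err)
}
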
Sep 10 23:46:30.322492 systemd[1]: Started cri-containerd-9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1.scope - libcontainer container 9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1. Sep 10 23:46:30.337663 containerd[1532]: time="2025-09-10T23:46:30.337484149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzpvg,Uid:2d195009-2e73-4c2e-8101-5d65b830e439,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcde258ccc8eccb3858b55178d6268186da7506192676e84e7f80d461bfcd301\"" Sep 10 23:46:30.340049 containerd[1532]: time="2025-09-10T23:46:30.339920008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dn9w,Uid:7ec97403-52b2-4394-b361-4cd9617f584d,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\"" Sep 10 23:46:30.356179 containerd[1532]: time="2025-09-10T23:46:30.356085938Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 23:46:30.369317 containerd[1532]: time="2025-09-10T23:46:30.369266883Z" level=info msg="CreateContainer within sandbox \"bcde258ccc8eccb3858b55178d6268186da7506192676e84e7f80d461bfcd301\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:46:30.372854 containerd[1532]: time="2025-09-10T23:46:30.372787831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r54v4,Uid:1cf795db-4294-4e84-98fc-db007d34dc3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\"" Sep 10 23:46:30.381168 containerd[1532]: time="2025-09-10T23:46:30.379993369Z" level=info msg="Container b4e385a0202d9d1018045204f107a305c045c603f528d97982a2da0b9cedae9a: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:30.386976 containerd[1532]: time="2025-09-10T23:46:30.386927065Z" level=info msg="CreateContainer within sandbox \"bcde258ccc8eccb3858b55178d6268186da7506192676e84e7f80d461bfcd301\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4e385a0202d9d1018045204f107a305c045c603f528d97982a2da0b9cedae9a\"" Sep 10 23:46:30.389424 containerd[1532]: time="2025-09-10T23:46:30.389381804Z" level=info msg="StartContainer for \"b4e385a0202d9d1018045204f107a305c045c603f528d97982a2da0b9cedae9a\"" Sep 10 23:46:30.392990 containerd[1532]: time="2025-09-10T23:46:30.392951833Z" level=info msg="connecting to shim b4e385a0202d9d1018045204f107a305c045c603f528d97982a2da0b9cedae9a" address="unix:///run/containerd/s/9ee48c20152a287bc379dd2564b4c775939de5c32291758418b9523e4688fc04" protocol=ttrpc version=3 Sep 10 23:46:30.417366 systemd[1]: Started cri-containerd-b4e385a0202d9d1018045204f107a305c045c603f528d97982a2da0b9cedae9a.scope - libcontainer container b4e385a0202d9d1018045204f107a305c045c603f528d97982a2da0b9cedae9a. 
Sep 10 23:46:30.455037 containerd[1532]: time="2025-09-10T23:46:30.454993570Z" level=info msg="StartContainer for \"b4e385a0202d9d1018045204f107a305c045c603f528d97982a2da0b9cedae9a\" returns successfully" Sep 10 23:46:31.699812 kubelet[2672]: I0910 23:46:31.699736 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tzpvg" podStartSLOduration=2.69970239 podStartE2EDuration="2.69970239s" podCreationTimestamp="2025-09-10 23:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:46:30.706963707 +0000 UTC m=+7.149987781" watchObservedRunningTime="2025-09-10 23:46:31.69970239 +0000 UTC m=+8.142726424" Sep 10 23:46:34.041577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104502243.mount: Deactivated successfully. Sep 10 23:46:37.024201 update_engine[1521]: I20250910 23:46:37.023792 1521 update_attempter.cc:509] Updating boot flags... Sep 10 23:46:40.620800 containerd[1532]: time="2025-09-10T23:46:40.620744825Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:40.621764 containerd[1532]: time="2025-09-10T23:46:40.621637629Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 10 23:46:40.622578 containerd[1532]: time="2025-09-10T23:46:40.622542673Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:40.631691 containerd[1532]: time="2025-09-10T23:46:40.631555111Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.275186451s" Sep 10 23:46:40.631691 containerd[1532]: time="2025-09-10T23:46:40.631599231Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 10 23:46:40.636072 containerd[1532]: time="2025-09-10T23:46:40.635964849Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 23:46:40.651899 containerd[1532]: time="2025-09-10T23:46:40.651859396Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 23:46:40.659043 containerd[1532]: time="2025-09-10T23:46:40.658869065Z" level=info msg="Container 153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:40.663655 containerd[1532]: time="2025-09-10T23:46:40.663619605Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\"" Sep 10 23:46:40.664193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361234541.mount: Deactivated successfully. Sep 10 23:46:40.664530 containerd[1532]: time="2025-09-10T23:46:40.664353208Z" level=info msg="StartContainer for \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\"" Sep 10 23:46:40.665104 containerd[1532]: time="2025-09-10T23:46:40.665075651Z" level=info msg="connecting to shim 153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0" address="unix:///run/containerd/s/9d667bf9c1e10f2eaa7f3a7748cda0d36d8d0452ad2fb2a4f6dcf52b41c8bdce" protocol=ttrpc version=3 Sep 10 23:46:40.711336 systemd[1]: Started cri-containerd-153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0.scope - libcontainer container 153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0. Sep 10 23:46:40.740192 containerd[1532]: time="2025-09-10T23:46:40.740103326Z" level=info msg="StartContainer for \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" returns successfully" Sep 10 23:46:40.756376 systemd[1]: cri-containerd-153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0.scope: Deactivated successfully. Sep 10 23:46:40.798581 containerd[1532]: time="2025-09-10T23:46:40.798531852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" id:\"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" pid:3119 exited_at:{seconds:1757548000 nanos:783805950}" Sep 10 23:46:40.799362 containerd[1532]: time="2025-09-10T23:46:40.799316055Z" level=info msg="received exit event container_id:\"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" id:\"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" pid:3119 exited_at:{seconds:1757548000 nanos:783805950}" Sep 10 23:46:40.837983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0-rootfs.mount: Deactivated successfully. 
Sep 10 23:46:41.735001 containerd[1532]: time="2025-09-10T23:46:41.734927873Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 23:46:41.749981 containerd[1532]: time="2025-09-10T23:46:41.749270330Z" level=info msg="Container c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:41.764013 containerd[1532]: time="2025-09-10T23:46:41.763965628Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\"" Sep 10 23:46:41.764562 containerd[1532]: time="2025-09-10T23:46:41.764539430Z" level=info msg="StartContainer for \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\"" Sep 10 23:46:41.765530 containerd[1532]: time="2025-09-10T23:46:41.765499194Z" level=info msg="connecting to shim c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b" address="unix:///run/containerd/s/9d667bf9c1e10f2eaa7f3a7748cda0d36d8d0452ad2fb2a4f6dcf52b41c8bdce" protocol=ttrpc version=3 Sep 10 23:46:41.784661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481068332.mount: Deactivated successfully. Sep 10 23:46:41.818452 systemd[1]: Started cri-containerd-c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b.scope - libcontainer container c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b. Sep 10 23:46:41.857169 containerd[1532]: time="2025-09-10T23:46:41.857101234Z" level=info msg="StartContainer for \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" returns successfully" Sep 10 23:46:41.869231 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:46:41.869647 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:46:41.869896 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:46:41.872314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:46:41.872533 systemd[1]: cri-containerd-c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b.scope: Deactivated successfully. Sep 10 23:46:41.874276 containerd[1532]: time="2025-09-10T23:46:41.872883616Z" level=info msg="received exit event container_id:\"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" id:\"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" pid:3172 exited_at:{seconds:1757548001 nanos:872370854}" Sep 10 23:46:41.874276 containerd[1532]: time="2025-09-10T23:46:41.872970697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" id:\"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" pid:3172 exited_at:{seconds:1757548001 nanos:872370854}" Sep 10 23:46:41.904291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 10 23:46:42.110061 containerd[1532]: time="2025-09-10T23:46:42.109948083Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:42.110980 containerd[1532]: time="2025-09-10T23:46:42.110948647Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 10 23:46:42.111905 containerd[1532]: time="2025-09-10T23:46:42.111881570Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:46:42.113134 containerd[1532]: time="2025-09-10T23:46:42.113098375Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.477099846s" Sep 10 23:46:42.113134 containerd[1532]: time="2025-09-10T23:46:42.113129135Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 10 23:46:42.119062 containerd[1532]: time="2025-09-10T23:46:42.119001396Z" level=info msg="CreateContainer within sandbox \"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 23:46:42.127216 containerd[1532]: time="2025-09-10T23:46:42.127170187Z" level=info msg="Container 42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:42.134430 containerd[1532]: time="2025-09-10T23:46:42.134378093Z" level=info msg="CreateContainer within sandbox \"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\"" Sep 10 23:46:42.134864 containerd[1532]: time="2025-09-10T23:46:42.134841255Z" level=info msg="StartContainer for \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\"" Sep 10 23:46:42.137173 containerd[1532]: time="2025-09-10T23:46:42.137036023Z" level=info msg="connecting to shim 42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d" address="unix:///run/containerd/s/a5b1468f9a692228fa98fcb353d8bf40f9acd2efcbad9d21dc87920388542feb" protocol=ttrpc version=3 Sep 10 23:46:42.161385 systemd[1]: Started cri-containerd-42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d.scope - libcontainer container 42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d. 
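The two PullImage results above report both byte counts and elapsed times: 157,646,710 bytes read in 10.275186451 s for the cilium image, and 17,135,306 bytes in 1.477099846 s for operator-generic. A rough back-of-the-envelope throughput check as a stdlib-only sketch; the helper name is ours:

```python
def pull_throughput_mib_s(bytes_read: int, seconds: float) -> float:
    """Approximate image pull throughput in MiB/s."""
    return bytes_read / seconds / (1 << 20)

print(f"cilium:           {pull_throughput_mib_s(157_646_710, 10.275186451):.1f} MiB/s")
print(f"operator-generic: {pull_throughput_mib_s(17_135_306, 1.477099846):.1f} MiB/s")
# -> roughly 14.6 MiB/s and 11.1 MiB/s respectively
```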
Sep 10 23:46:42.192333 containerd[1532]: time="2025-09-10T23:46:42.192267427Z" level=info msg="StartContainer for \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" returns successfully" Sep 10 23:46:42.737707 containerd[1532]: time="2025-09-10T23:46:42.737510199Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 23:46:42.747133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b-rootfs.mount: Deactivated successfully. Sep 10 23:46:42.788212 containerd[1532]: time="2025-09-10T23:46:42.788160986Z" level=info msg="Container 8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:42.789334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552251139.mount: Deactivated successfully. Sep 10 23:46:42.805334 containerd[1532]: time="2025-09-10T23:46:42.805284369Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\"" Sep 10 23:46:42.806197 containerd[1532]: time="2025-09-10T23:46:42.806162893Z" level=info msg="StartContainer for \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\"" Sep 10 23:46:42.807670 containerd[1532]: time="2025-09-10T23:46:42.807639058Z" level=info msg="connecting to shim 8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237" address="unix:///run/containerd/s/9d667bf9c1e10f2eaa7f3a7748cda0d36d8d0452ad2fb2a4f6dcf52b41c8bdce" protocol=ttrpc version=3 Sep 10 23:46:42.814647 kubelet[2672]: I0910 23:46:42.814565 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r54v4" podStartSLOduration=2.075279353 podStartE2EDuration="13.814545524s" podCreationTimestamp="2025-09-10 23:46:29 +0000 UTC" firstStartedPulling="2025-09-10 23:46:30.374548806 +0000 UTC m=+6.817572840" lastFinishedPulling="2025-09-10 23:46:42.113814977 +0000 UTC m=+18.556839011" observedRunningTime="2025-09-10 23:46:42.813975561 +0000 UTC m=+19.256999595" watchObservedRunningTime="2025-09-10 23:46:42.814545524 +0000 UTC m=+19.257569638" Sep 10 23:46:42.843366 systemd[1]: Started cri-containerd-8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237.scope - libcontainer container 8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237. Sep 10 23:46:42.889100 containerd[1532]: time="2025-09-10T23:46:42.889062999Z" level=info msg="StartContainer for \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" returns successfully" Sep 10 23:46:42.889583 systemd[1]: cri-containerd-8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237.scope: Deactivated successfully. 
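For the cilium-operator pod, the kubelet latency line above reports podStartE2EDuration="13.814545524s" and podStartSLOduration=2.075279353, together with firstStartedPulling/lastFinishedPulling timestamps. Those figures are consistent with the SLO duration being the end-to-end duration minus the image-pull window; a stdlib-only sketch to reproduce the number (the parsing helper is ours, not kubelet code):

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    """Parse 'YYYY-MM-DD HH:MM:SS.NNNNNNNNN +0000 UTC', truncating to microseconds."""
    date, clock = s.split(" ")[:2]
    whole, _, frac = clock.partition(".")
    return datetime.fromisoformat(f"{date} {whole}.{(frac + '000000')[:6]}").replace(tzinfo=timezone.utc)

first_pull = ts("2025-09-10 23:46:30.374548806 +0000 UTC")
last_pull  = ts("2025-09-10 23:46:42.113814977 +0000 UTC")
e2e_seconds = 13.814545524  # podStartE2EDuration from the log line

slo = e2e_seconds - (last_pull - first_pull).total_seconds()
print(f"{slo:.6f}s")  # ~2.075280s vs. the logged podStartSLOduration=2.075279353
```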
Sep 10 23:46:42.891150 containerd[1532]: time="2025-09-10T23:46:42.890512844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" id:\"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" pid:3267 exited_at:{seconds:1757548002 nanos:890191403}" Sep 10 23:46:42.899334 containerd[1532]: time="2025-09-10T23:46:42.899281876Z" level=info msg="received exit event container_id:\"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" id:\"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" pid:3267 exited_at:{seconds:1757548002 nanos:890191403}" Sep 10 23:46:43.746327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237-rootfs.mount: Deactivated successfully. Sep 10 23:46:43.750156 containerd[1532]: time="2025-09-10T23:46:43.749922243Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 23:46:43.758793 containerd[1532]: time="2025-09-10T23:46:43.758177672Z" level=info msg="Container 460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:43.766693 containerd[1532]: time="2025-09-10T23:46:43.766622821Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\"" Sep 10 23:46:43.769493 containerd[1532]: time="2025-09-10T23:46:43.768348747Z" level=info msg="StartContainer for \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\"" Sep 10 23:46:43.769493 containerd[1532]: time="2025-09-10T23:46:43.769207030Z" level=info msg="connecting to shim 460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c" address="unix:///run/containerd/s/9d667bf9c1e10f2eaa7f3a7748cda0d36d8d0452ad2fb2a4f6dcf52b41c8bdce" protocol=ttrpc version=3 Sep 10 23:46:43.790331 systemd[1]: Started cri-containerd-460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c.scope - libcontainer container 460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c. Sep 10 23:46:43.814841 systemd[1]: cri-containerd-460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c.scope: Deactivated successfully. 
Sep 10 23:46:43.815794 containerd[1532]: time="2025-09-10T23:46:43.815511710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" id:\"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" pid:3306 exited_at:{seconds:1757548003 nanos:815262309}" Sep 10 23:46:43.817411 containerd[1532]: time="2025-09-10T23:46:43.817294676Z" level=info msg="received exit event container_id:\"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" id:\"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" pid:3306 exited_at:{seconds:1757548003 nanos:815262309}" Sep 10 23:46:43.824734 containerd[1532]: time="2025-09-10T23:46:43.824690862Z" level=info msg="StartContainer for \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" returns successfully" Sep 10 23:46:43.836616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c-rootfs.mount: Deactivated successfully. Sep 10 23:46:44.756005 containerd[1532]: time="2025-09-10T23:46:44.755948601Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 23:46:44.766166 containerd[1532]: time="2025-09-10T23:46:44.765555672Z" level=info msg="Container 042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:44.771069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759738750.mount: Deactivated successfully. Sep 10 23:46:44.776552 containerd[1532]: time="2025-09-10T23:46:44.776490828Z" level=info msg="CreateContainer within sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\"" Sep 10 23:46:44.777385 containerd[1532]: time="2025-09-10T23:46:44.777034789Z" level=info msg="StartContainer for \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\"" Sep 10 23:46:44.778431 containerd[1532]: time="2025-09-10T23:46:44.778396634Z" level=info msg="connecting to shim 042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580" address="unix:///run/containerd/s/9d667bf9c1e10f2eaa7f3a7748cda0d36d8d0452ad2fb2a4f6dcf52b41c8bdce" protocol=ttrpc version=3 Sep 10 23:46:44.798340 systemd[1]: Started cri-containerd-042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580.scope - libcontainer container 042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580. 
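Each cilium init container above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) runs through the same CreateContainer, StartContainer, TaskExit sequence before the cilium-agent container starts. A stdlib-only sketch for pulling those lifecycle events for one container ID out of journal text in the format shown here; the regex and helper are ours and cover only the message shape visible in this excerpt:

```python
import re

# containerd journal entries look like: time="..." level=info msg="..."
# where the msg field may contain backslash-escaped quotes.
EVENT = re.compile(r'time="([^"]+)" level=\w+ msg="((?:[^"\\]|\\.)*)"')

def lifecycle(journal_text: str, container_id: str) -> list[tuple[str, str]]:
    """Return (timestamp, first word of msg) for every event naming the container."""
    short = container_id[:12]
    return [(stamp, msg.split(" ", 1)[0])
            for stamp, msg in EVENT.findall(journal_text)
            if short in msg]

# e.g. lifecycle(open("node.log").read(), "460aba16a59d")
# -> [('2025-09-10T23:46:43.766622821Z', 'CreateContainer'), ..., ('...', 'TaskExit'), ...]
```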
Sep 10 23:46:44.832770 containerd[1532]: time="2025-09-10T23:46:44.832731330Z" level=info msg="StartContainer for \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" returns successfully" Sep 10 23:46:44.946169 containerd[1532]: time="2025-09-10T23:46:44.946098698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" id:\"9e6d263783d6f9c93807bd166ce9862667d2727f3a0c75eb4bf5819ae5489a0d\" pid:3377 exited_at:{seconds:1757548004 nanos:945775257}" Sep 10 23:46:45.006702 kubelet[2672]: I0910 23:46:45.006392 2672 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:46:45.055681 systemd[1]: Created slice kubepods-burstable-poda95d2f63_fa9f_4cd3_b60c_948e9bb1b1b9.slice - libcontainer container kubepods-burstable-poda95d2f63_fa9f_4cd3_b60c_948e9bb1b1b9.slice. Sep 10 23:46:45.074241 systemd[1]: Created slice kubepods-burstable-pod5eb1e355_61da_4967_add1_d4f2dde1529d.slice - libcontainer container kubepods-burstable-pod5eb1e355_61da_4967_add1_d4f2dde1529d.slice. Sep 10 23:46:45.082510 kubelet[2672]: I0910 23:46:45.082463 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m67w\" (UniqueName: \"kubernetes.io/projected/a95d2f63-fa9f-4cd3-b60c-948e9bb1b1b9-kube-api-access-4m67w\") pod \"coredns-674b8bbfcf-cszl6\" (UID: \"a95d2f63-fa9f-4cd3-b60c-948e9bb1b1b9\") " pod="kube-system/coredns-674b8bbfcf-cszl6" Sep 10 23:46:45.082510 kubelet[2672]: I0910 23:46:45.082514 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95d2f63-fa9f-4cd3-b60c-948e9bb1b1b9-config-volume\") pod \"coredns-674b8bbfcf-cszl6\" (UID: \"a95d2f63-fa9f-4cd3-b60c-948e9bb1b1b9\") " pod="kube-system/coredns-674b8bbfcf-cszl6" Sep 10 23:46:45.183206 kubelet[2672]: I0910 23:46:45.183086 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zccz\" (UniqueName: \"kubernetes.io/projected/5eb1e355-61da-4967-add1-d4f2dde1529d-kube-api-access-6zccz\") pod \"coredns-674b8bbfcf-cfvjm\" (UID: \"5eb1e355-61da-4967-add1-d4f2dde1529d\") " pod="kube-system/coredns-674b8bbfcf-cfvjm" Sep 10 23:46:45.183206 kubelet[2672]: I0910 23:46:45.183181 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5eb1e355-61da-4967-add1-d4f2dde1529d-config-volume\") pod \"coredns-674b8bbfcf-cfvjm\" (UID: \"5eb1e355-61da-4967-add1-d4f2dde1529d\") " pod="kube-system/coredns-674b8bbfcf-cfvjm" Sep 10 23:46:45.363072 containerd[1532]: time="2025-09-10T23:46:45.362744456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cszl6,Uid:a95d2f63-fa9f-4cd3-b60c-948e9bb1b1b9,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:45.380430 containerd[1532]: time="2025-09-10T23:46:45.380388230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cfvjm,Uid:5eb1e355-61da-4967-add1-d4f2dde1529d,Namespace:kube-system,Attempt:0,}" Sep 10 23:46:46.869325 systemd-networkd[1438]: cilium_host: Link UP Sep 10 23:46:46.869437 systemd-networkd[1438]: cilium_net: Link UP Sep 10 23:46:46.869557 systemd-networkd[1438]: cilium_net: Gained carrier Sep 10 23:46:46.869671 systemd-networkd[1438]: cilium_host: Gained carrier Sep 10 23:46:46.959443 systemd-networkd[1438]: cilium_vxlan: Link UP 
Sep 10 23:46:46.959450 systemd-networkd[1438]: cilium_vxlan: Gained carrier Sep 10 23:46:47.128310 systemd-networkd[1438]: cilium_net: Gained IPv6LL Sep 10 23:46:47.246212 kernel: NET: Registered PF_ALG protocol family Sep 10 23:46:47.288287 systemd-networkd[1438]: cilium_host: Gained IPv6LL Sep 10 23:46:47.913109 systemd-networkd[1438]: lxc_health: Link UP Sep 10 23:46:47.915548 systemd-networkd[1438]: lxc_health: Gained carrier Sep 10 23:46:48.254468 kubelet[2672]: I0910 23:46:48.254320 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4dn9w" podStartSLOduration=8.974170125 podStartE2EDuration="19.254301079s" podCreationTimestamp="2025-09-10 23:46:29 +0000 UTC" firstStartedPulling="2025-09-10 23:46:30.355679294 +0000 UTC m=+6.798703288" lastFinishedPulling="2025-09-10 23:46:40.635810248 +0000 UTC m=+17.078834242" observedRunningTime="2025-09-10 23:46:45.775533512 +0000 UTC m=+22.218557546" watchObservedRunningTime="2025-09-10 23:46:48.254301079 +0000 UTC m=+24.697325113" Sep 10 23:46:48.413132 systemd-networkd[1438]: lxcc0071f26d08a: Link UP Sep 10 23:46:48.414207 kernel: eth0: renamed from tmp21bfc Sep 10 23:46:48.415657 systemd-networkd[1438]: lxcc0071f26d08a: Gained carrier Sep 10 23:46:48.428270 systemd-networkd[1438]: lxc1cba9c5583ca: Link UP Sep 10 23:46:48.442098 kernel: eth0: renamed from tmpaacd5 Sep 10 23:46:48.443201 systemd-networkd[1438]: lxc1cba9c5583ca: Gained carrier Sep 10 23:46:48.905314 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL Sep 10 23:46:49.672326 systemd-networkd[1438]: lxc_health: Gained IPv6LL Sep 10 23:46:49.864358 systemd-networkd[1438]: lxcc0071f26d08a: Gained IPv6LL Sep 10 23:46:49.928328 systemd-networkd[1438]: lxc1cba9c5583ca: Gained IPv6LL Sep 10 23:46:52.225751 containerd[1532]: time="2025-09-10T23:46:52.225681321Z" level=info msg="connecting to shim 21bfc43ee4f1ae5062f61ee036d8b86cf6b55da55cfe449b6804288408f93641" address="unix:///run/containerd/s/8c2b2cec1c1126c7cf407960c8b7a234aa27463682db2ab380d485de1020e80d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:52.228157 containerd[1532]: time="2025-09-10T23:46:52.228108046Z" level=info msg="connecting to shim aacd5cf588c24fca4084ce89c27194dd45d48b26dc156d8ed77ac52372c49f6f" address="unix:///run/containerd/s/842734beb9d0b5e9d431b8530681d1849f59ac3fe08281932dcb1aded86d710e" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:46:52.267397 systemd[1]: Started cri-containerd-21bfc43ee4f1ae5062f61ee036d8b86cf6b55da55cfe449b6804288408f93641.scope - libcontainer container 21bfc43ee4f1ae5062f61ee036d8b86cf6b55da55cfe449b6804288408f93641. Sep 10 23:46:52.268762 systemd[1]: Started cri-containerd-aacd5cf588c24fca4084ce89c27194dd45d48b26dc156d8ed77ac52372c49f6f.scope - libcontainer container aacd5cf588c24fca4084ce89c27194dd45d48b26dc156d8ed77ac52372c49f6f. 
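The systemd-networkd entries above track the Cilium datapath interfaces (cilium_host, cilium_net, cilium_vxlan, lxc_health and the per-pod lxc* devices) through Link UP, Gained carrier and Gained IPv6LL. A stdlib-only sketch that collects those transitions per interface from journal text; the regex is ours and only covers the message variants seen in this log:

```python
import re
from collections import defaultdict

LINK = re.compile(
    r'(\w+ \d+ [\d:.]+) systemd-networkd\[\d+\]: (\S+): '
    r'(Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)'
)

def link_history(journal_text: str) -> dict[str, list[tuple[str, str]]]:
    """Map interface name -> [(timestamp, state change), ...] in log order."""
    history: dict[str, list[tuple[str, str]]] = defaultdict(list)
    for stamp, iface, state in LINK.findall(journal_text):
        history[iface].append((stamp, state))
    return dict(history)

# e.g. link_history(text)["lxc_health"]
# -> [('Sep 10 23:46:47.913109', 'Link UP'), ('Sep 10 23:46:47.915548', 'Gained carrier'), ...]
```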
Sep 10 23:46:52.282695 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:46:52.286429 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:46:52.303152 containerd[1532]: time="2025-09-10T23:46:52.303093191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cszl6,Uid:a95d2f63-fa9f-4cd3-b60c-948e9bb1b1b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"21bfc43ee4f1ae5062f61ee036d8b86cf6b55da55cfe449b6804288408f93641\"" Sep 10 23:46:52.308285 containerd[1532]: time="2025-09-10T23:46:52.308131401Z" level=info msg="CreateContainer within sandbox \"21bfc43ee4f1ae5062f61ee036d8b86cf6b55da55cfe449b6804288408f93641\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:46:52.324436 containerd[1532]: time="2025-09-10T23:46:52.324384192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cfvjm,Uid:5eb1e355-61da-4967-add1-d4f2dde1529d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aacd5cf588c24fca4084ce89c27194dd45d48b26dc156d8ed77ac52372c49f6f\"" Sep 10 23:46:52.327091 containerd[1532]: time="2025-09-10T23:46:52.327037877Z" level=info msg="Container ea092513bb9130c8a2fc3b65aed0c10083c187a4690092f4f1dd5bf08a0a43d7: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:52.327398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125752262.mount: Deactivated successfully. Sep 10 23:46:52.331442 containerd[1532]: time="2025-09-10T23:46:52.331402566Z" level=info msg="CreateContainer within sandbox \"aacd5cf588c24fca4084ce89c27194dd45d48b26dc156d8ed77ac52372c49f6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:46:52.333892 containerd[1532]: time="2025-09-10T23:46:52.333847371Z" level=info msg="CreateContainer within sandbox \"21bfc43ee4f1ae5062f61ee036d8b86cf6b55da55cfe449b6804288408f93641\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea092513bb9130c8a2fc3b65aed0c10083c187a4690092f4f1dd5bf08a0a43d7\"" Sep 10 23:46:52.334598 containerd[1532]: time="2025-09-10T23:46:52.334562372Z" level=info msg="StartContainer for \"ea092513bb9130c8a2fc3b65aed0c10083c187a4690092f4f1dd5bf08a0a43d7\"" Sep 10 23:46:52.335638 containerd[1532]: time="2025-09-10T23:46:52.335604894Z" level=info msg="connecting to shim ea092513bb9130c8a2fc3b65aed0c10083c187a4690092f4f1dd5bf08a0a43d7" address="unix:///run/containerd/s/8c2b2cec1c1126c7cf407960c8b7a234aa27463682db2ab380d485de1020e80d" protocol=ttrpc version=3 Sep 10 23:46:52.343024 containerd[1532]: time="2025-09-10T23:46:52.342785068Z" level=info msg="Container 132205b7ca308b97275acbe2cd0c78bb90d9b1e86836d1716abf9bded00ff390: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:46:52.349277 containerd[1532]: time="2025-09-10T23:46:52.349208680Z" level=info msg="CreateContainer within sandbox \"aacd5cf588c24fca4084ce89c27194dd45d48b26dc156d8ed77ac52372c49f6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"132205b7ca308b97275acbe2cd0c78bb90d9b1e86836d1716abf9bded00ff390\"" Sep 10 23:46:52.351901 containerd[1532]: time="2025-09-10T23:46:52.351857206Z" level=info msg="StartContainer for \"132205b7ca308b97275acbe2cd0c78bb90d9b1e86836d1716abf9bded00ff390\"" Sep 10 23:46:52.355956 systemd[1]: Started cri-containerd-ea092513bb9130c8a2fc3b65aed0c10083c187a4690092f4f1dd5bf08a0a43d7.scope - libcontainer container 
ea092513bb9130c8a2fc3b65aed0c10083c187a4690092f4f1dd5bf08a0a43d7. Sep 10 23:46:52.357048 containerd[1532]: time="2025-09-10T23:46:52.356369894Z" level=info msg="connecting to shim 132205b7ca308b97275acbe2cd0c78bb90d9b1e86836d1716abf9bded00ff390" address="unix:///run/containerd/s/842734beb9d0b5e9d431b8530681d1849f59ac3fe08281932dcb1aded86d710e" protocol=ttrpc version=3 Sep 10 23:46:52.376376 systemd[1]: Started cri-containerd-132205b7ca308b97275acbe2cd0c78bb90d9b1e86836d1716abf9bded00ff390.scope - libcontainer container 132205b7ca308b97275acbe2cd0c78bb90d9b1e86836d1716abf9bded00ff390. Sep 10 23:46:52.413639 containerd[1532]: time="2025-09-10T23:46:52.413559445Z" level=info msg="StartContainer for \"132205b7ca308b97275acbe2cd0c78bb90d9b1e86836d1716abf9bded00ff390\" returns successfully" Sep 10 23:46:52.425816 containerd[1532]: time="2025-09-10T23:46:52.425767149Z" level=info msg="StartContainer for \"ea092513bb9130c8a2fc3b65aed0c10083c187a4690092f4f1dd5bf08a0a43d7\" returns successfully" Sep 10 23:46:52.813743 kubelet[2672]: I0910 23:46:52.811062 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cszl6" podStartSLOduration=23.811043454 podStartE2EDuration="23.811043454s" podCreationTimestamp="2025-09-10 23:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:46:52.808325729 +0000 UTC m=+29.251349763" watchObservedRunningTime="2025-09-10 23:46:52.811043454 +0000 UTC m=+29.254067488" Sep 10 23:46:52.813743 kubelet[2672]: I0910 23:46:52.813418 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cfvjm" podStartSLOduration=23.813400219000002 podStartE2EDuration="23.813400219s" podCreationTimestamp="2025-09-10 23:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:46:52.791730737 +0000 UTC m=+29.234754771" watchObservedRunningTime="2025-09-10 23:46:52.813400219 +0000 UTC m=+29.256424253" Sep 10 23:46:53.367753 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:55366.service - OpenSSH per-connection server daemon (10.0.0.1:55366). Sep 10 23:46:53.409726 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 55366 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:46:53.411205 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:46:53.418216 systemd-logind[1516]: New session 8 of user core. Sep 10 23:46:53.428425 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 23:46:53.571552 sshd[4024]: Connection closed by 10.0.0.1 port 55366 Sep 10 23:46:53.571899 sshd-session[4022]: pam_unix(sshd:session): session closed for user core Sep 10 23:46:53.574860 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:55366.service: Deactivated successfully. Sep 10 23:46:53.576683 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:46:53.577502 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:46:53.579621 systemd-logind[1516]: Removed session 8. Sep 10 23:46:58.583579 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:55378.service - OpenSSH per-connection server daemon (10.0.0.1:55378). 
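Both coredns latency lines above (like the kube-proxy one earlier) carry firstStartedPulling and lastFinishedPulling set to Go's zero time, "0001-01-01 00:00:00 +0000 UTC", and their podStartSLOduration equals podStartE2EDuration, i.e. no image pull contributed to startup. A stdlib-only sketch for reading the quoted key="value" fields out of such a line (field names as in the log; unquoted numeric fields are ignored by this regex):

```python
import re

GO_ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"

def quoted_fields(line: str) -> dict[str, str]:
    """Extract key="value" pairs from a kubelet pod_startup_latency_tracker line."""
    return dict(re.findall(r'(\w+)="([^"]*)"', line))

fields = quoted_fields(
    'pod="kube-system/coredns-674b8bbfcf-cszl6" podStartE2EDuration="23.811043454s" '
    'firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" '
    'lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"'
)
print(fields["firstStartedPulling"] == GO_ZERO_TIME)  # True -> no pull was recorded for this pod
```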
Sep 10 23:46:58.634889 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 55378 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:46:58.636640 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:46:58.642217 systemd-logind[1516]: New session 9 of user core. Sep 10 23:46:58.651323 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 23:46:58.779230 sshd[4042]: Connection closed by 10.0.0.1 port 55378 Sep 10 23:46:58.780126 sshd-session[4040]: pam_unix(sshd:session): session closed for user core Sep 10 23:46:58.784608 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:55378.service: Deactivated successfully. Sep 10 23:46:58.786212 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 23:46:58.786956 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Sep 10 23:46:58.788423 systemd-logind[1516]: Removed session 9. Sep 10 23:47:03.801547 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:40502.service - OpenSSH per-connection server daemon (10.0.0.1:40502). Sep 10 23:47:03.849885 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 40502 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:03.852263 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:03.859522 systemd-logind[1516]: New session 10 of user core. Sep 10 23:47:03.874420 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 23:47:04.027699 sshd[4065]: Connection closed by 10.0.0.1 port 40502 Sep 10 23:47:04.028021 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:04.031321 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:40502.service: Deactivated successfully. Sep 10 23:47:04.034537 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 23:47:04.035403 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Sep 10 23:47:04.036652 systemd-logind[1516]: Removed session 10. Sep 10 23:47:09.047556 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:40506.service - OpenSSH per-connection server daemon (10.0.0.1:40506). Sep 10 23:47:09.103171 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 40506 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:09.105063 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:09.109707 systemd-logind[1516]: New session 11 of user core. Sep 10 23:47:09.130380 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 23:47:09.286222 sshd[4082]: Connection closed by 10.0.0.1 port 40506 Sep 10 23:47:09.287400 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:09.297121 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:40506.service: Deactivated successfully. Sep 10 23:47:09.300532 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 23:47:09.302199 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Sep 10 23:47:09.308344 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:40518.service - OpenSSH per-connection server daemon (10.0.0.1:40518). Sep 10 23:47:09.312110 systemd-logind[1516]: Removed session 11. 
Sep 10 23:47:09.369340 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 40518 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:09.370741 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:09.375984 systemd-logind[1516]: New session 12 of user core. Sep 10 23:47:09.390346 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 23:47:09.591775 sshd[4098]: Connection closed by 10.0.0.1 port 40518 Sep 10 23:47:09.592335 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:09.605721 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:40518.service: Deactivated successfully. Sep 10 23:47:09.607788 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 23:47:09.610567 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Sep 10 23:47:09.615445 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:40528.service - OpenSSH per-connection server daemon (10.0.0.1:40528). Sep 10 23:47:09.617827 systemd-logind[1516]: Removed session 12. Sep 10 23:47:09.668851 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 40528 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:09.670281 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:09.674541 systemd-logind[1516]: New session 13 of user core. Sep 10 23:47:09.685359 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 23:47:09.800175 sshd[4112]: Connection closed by 10.0.0.1 port 40528 Sep 10 23:47:09.800683 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:09.804910 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:40528.service: Deactivated successfully. Sep 10 23:47:09.808605 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 23:47:09.809503 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. Sep 10 23:47:09.810969 systemd-logind[1516]: Removed session 13. Sep 10 23:47:14.826720 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:39696.service - OpenSSH per-connection server daemon (10.0.0.1:39696). Sep 10 23:47:14.881924 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 39696 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:14.883350 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:14.893381 systemd-logind[1516]: New session 14 of user core. Sep 10 23:47:14.905193 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 23:47:15.028729 sshd[4128]: Connection closed by 10.0.0.1 port 39696 Sep 10 23:47:15.029415 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:15.033597 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:39696.service: Deactivated successfully. Sep 10 23:47:15.036542 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 23:47:15.038999 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Sep 10 23:47:15.042677 systemd-logind[1516]: Removed session 14. Sep 10 23:47:20.045988 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:36776.service - OpenSSH per-connection server daemon (10.0.0.1:36776). 
Sep 10 23:47:20.135203 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 36776 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:20.139493 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:20.149235 systemd-logind[1516]: New session 15 of user core. Sep 10 23:47:20.159328 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 23:47:20.297760 sshd[4144]: Connection closed by 10.0.0.1 port 36776 Sep 10 23:47:20.298441 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:20.312601 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:36776.service: Deactivated successfully. Sep 10 23:47:20.317505 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 23:47:20.322922 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. Sep 10 23:47:20.327880 systemd-logind[1516]: Removed session 15. Sep 10 23:47:20.332438 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:36784.service - OpenSSH per-connection server daemon (10.0.0.1:36784). Sep 10 23:47:20.383543 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 36784 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:20.384664 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:20.388459 systemd-logind[1516]: New session 16 of user core. Sep 10 23:47:20.400521 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 23:47:20.595706 sshd[4160]: Connection closed by 10.0.0.1 port 36784 Sep 10 23:47:20.596774 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:20.611671 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:36784.service: Deactivated successfully. Sep 10 23:47:20.616342 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 23:47:20.619250 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. Sep 10 23:47:20.621260 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:36792.service - OpenSSH per-connection server daemon (10.0.0.1:36792). Sep 10 23:47:20.623517 systemd-logind[1516]: Removed session 16. Sep 10 23:47:20.672694 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 36792 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:20.673872 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:20.677815 systemd-logind[1516]: New session 17 of user core. Sep 10 23:47:20.694312 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 23:47:21.308669 sshd[4174]: Connection closed by 10.0.0.1 port 36792 Sep 10 23:47:21.308996 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:21.317539 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:36792.service: Deactivated successfully. Sep 10 23:47:21.319115 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 23:47:21.320253 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. Sep 10 23:47:21.327779 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:36800.service - OpenSSH per-connection server daemon (10.0.0.1:36800). Sep 10 23:47:21.328968 systemd-logind[1516]: Removed session 17. 
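This block of sshd/systemd-logind entries opens and closes a numbered session for user core every few seconds to minutes. A stdlib-only sketch pairing each "New session N" with the matching "Removed session N" to get per-session durations; it parses only the time-of-day, which is enough within this single-day capture, and the regexes are ours:

```python
import re
from datetime import datetime

STAMP = r'\w+ \d+ ([\d:.]+) systemd-logind\[\d+\]: '
NEW = re.compile(STAMP + r'New session (\d+) of user core\.')
REMOVED = re.compile(STAMP + r'Removed session (\d+)\.')

def session_durations(journal_text: str) -> dict[str, float]:
    """Seconds between 'New session N' and 'Removed session N' for user core."""
    def clock(t: str) -> datetime:
        return datetime.strptime(t, "%H:%M:%S.%f")
    opened = {sid: clock(t) for t, sid in NEW.findall(journal_text)}
    return {sid: (clock(t) - opened[sid]).total_seconds()
            for t, sid in REMOVED.findall(journal_text) if sid in opened}

# e.g. session 8 above: opened 23:46:53.418216, removed 23:46:53.579621 -> ~0.16 s
```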
Sep 10 23:47:21.384521 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 36800 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:21.385919 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:21.389722 systemd-logind[1516]: New session 18 of user core. Sep 10 23:47:21.399305 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 23:47:21.628904 sshd[4196]: Connection closed by 10.0.0.1 port 36800 Sep 10 23:47:21.629485 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:21.639177 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:36800.service: Deactivated successfully. Sep 10 23:47:21.641083 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 23:47:21.642442 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. Sep 10 23:47:21.644662 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:36804.service - OpenSSH per-connection server daemon (10.0.0.1:36804). Sep 10 23:47:21.647350 systemd-logind[1516]: Removed session 18. Sep 10 23:47:21.704018 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 36804 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:21.705280 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:21.710237 systemd-logind[1516]: New session 19 of user core. Sep 10 23:47:21.728339 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 23:47:21.852202 sshd[4210]: Connection closed by 10.0.0.1 port 36804 Sep 10 23:47:21.851750 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:21.855543 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:36804.service: Deactivated successfully. Sep 10 23:47:21.857358 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 23:47:21.859385 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. Sep 10 23:47:21.860389 systemd-logind[1516]: Removed session 19. Sep 10 23:47:26.863777 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:36820.service - OpenSSH per-connection server daemon (10.0.0.1:36820). Sep 10 23:47:26.921781 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 36820 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:26.924251 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:26.929298 systemd-logind[1516]: New session 20 of user core. Sep 10 23:47:26.948379 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 23:47:27.065436 sshd[4229]: Connection closed by 10.0.0.1 port 36820 Sep 10 23:47:27.065762 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:27.070092 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:36820.service: Deactivated successfully. Sep 10 23:47:27.071901 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 23:47:27.072639 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. Sep 10 23:47:27.074040 systemd-logind[1516]: Removed session 20. Sep 10 23:47:32.080526 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:49738.service - OpenSSH per-connection server daemon (10.0.0.1:49738). 
Sep 10 23:47:32.135958 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 49738 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:32.137991 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:32.142947 systemd-logind[1516]: New session 21 of user core. Sep 10 23:47:32.158086 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 10 23:47:32.285847 sshd[4247]: Connection closed by 10.0.0.1 port 49738 Sep 10 23:47:32.286296 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:32.299114 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:49738.service: Deactivated successfully. Sep 10 23:47:32.301003 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 23:47:32.302061 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit. Sep 10 23:47:32.304396 systemd-logind[1516]: Removed session 21. Sep 10 23:47:32.306090 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:49748.service - OpenSSH per-connection server daemon (10.0.0.1:49748). Sep 10 23:47:32.357799 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 49748 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:32.359019 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:32.363211 systemd-logind[1516]: New session 22 of user core. Sep 10 23:47:32.372300 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 23:47:34.952989 containerd[1532]: time="2025-09-10T23:47:34.952589291Z" level=info msg="StopContainer for \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" with timeout 30 (s)" Sep 10 23:47:34.953365 containerd[1532]: time="2025-09-10T23:47:34.953236338Z" level=info msg="Stop container \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" with signal terminated" Sep 10 23:47:34.967970 systemd[1]: cri-containerd-42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d.scope: Deactivated successfully. 
Sep 10 23:47:34.969467 containerd[1532]: time="2025-09-10T23:47:34.969401749Z" level=info msg="received exit event container_id:\"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" id:\"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" pid:3230 exited_at:{seconds:1757548054 nanos:968802423}" Sep 10 23:47:34.969788 containerd[1532]: time="2025-09-10T23:47:34.969743553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" id:\"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" pid:3230 exited_at:{seconds:1757548054 nanos:968802423}" Sep 10 23:47:34.981893 containerd[1532]: time="2025-09-10T23:47:34.981279755Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:47:34.986456 containerd[1532]: time="2025-09-10T23:47:34.986414729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" id:\"35d013d3c55f8f412df0da2e0a5b43b02258a49b7d42b3ece61b9871d9be3fca\" pid:4290 exited_at:{seconds:1757548054 nanos:986102806}" Sep 10 23:47:34.988817 containerd[1532]: time="2025-09-10T23:47:34.988778874Z" level=info msg="StopContainer for \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" with timeout 2 (s)" Sep 10 23:47:34.989807 containerd[1532]: time="2025-09-10T23:47:34.989700244Z" level=info msg="Stop container \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" with signal terminated" Sep 10 23:47:34.994282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d-rootfs.mount: Deactivated successfully. Sep 10 23:47:35.000816 systemd-networkd[1438]: lxc_health: Link DOWN Sep 10 23:47:35.000829 systemd-networkd[1438]: lxc_health: Lost carrier Sep 10 23:47:35.015072 systemd[1]: cri-containerd-042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580.scope: Deactivated successfully. Sep 10 23:47:35.015699 systemd[1]: cri-containerd-042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580.scope: Consumed 6.598s CPU time, 123M memory peak, 128K read from disk, 12.9M written to disk. Sep 10 23:47:35.016496 containerd[1532]: time="2025-09-10T23:47:35.016432643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" id:\"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" pid:3343 exited_at:{seconds:1757548055 nanos:15670635}" Sep 10 23:47:35.031183 containerd[1532]: time="2025-09-10T23:47:35.031098594Z" level=info msg="received exit event container_id:\"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" id:\"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" pid:3343 exited_at:{seconds:1757548055 nanos:15670635}" Sep 10 23:47:35.050387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580-rootfs.mount: Deactivated successfully. 
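When the cilium-agent scope is stopped above, systemd logs a resource-accounting summary: "Consumed 6.598s CPU time, 123M memory peak, 128K read from disk, 12.9M written to disk." A small stdlib-only sketch turning that message into numbers, assuming the usual 1024-based K/M/G suffixes; both the regexes and the unit table are ours:

```python
import re

UNITS = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

def parse_accounting(msg: str) -> dict[str, float]:
    """Parse a systemd 'Consumed ...' resource accounting message."""
    out: dict[str, float] = {}
    cpu = re.search(r'([\d.]+)s CPU time', msg)
    if cpu:
        out["cpu_seconds"] = float(cpu.group(1))
    for value, unit, what in re.findall(r'([\d.]+)([KMG]) ([a-z ]+?)(?:[,.]|$)', msg):
        out[what.strip().replace(" ", "_") + "_bytes"] = float(value) * UNITS[unit]
    return out

print(parse_accounting(
    "Consumed 6.598s CPU time, 123M memory peak, 128K read from disk, 12.9M written to disk."
))
# -> cpu_seconds 6.598, memory_peak ~123 MiB, read_from_disk 128 KiB, written_to_disk ~12.9 MiB
```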
Sep 10 23:47:35.134445 containerd[1532]: time="2025-09-10T23:47:35.134294817Z" level=info msg="StopContainer for \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" returns successfully" Sep 10 23:47:35.137558 containerd[1532]: time="2025-09-10T23:47:35.137489410Z" level=info msg="StopPodSandbox for \"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\"" Sep 10 23:47:35.140711 containerd[1532]: time="2025-09-10T23:47:35.140611082Z" level=info msg="StopContainer for \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" returns successfully" Sep 10 23:47:35.140819 containerd[1532]: time="2025-09-10T23:47:35.140710803Z" level=info msg="Container to stop \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:47:35.141423 containerd[1532]: time="2025-09-10T23:47:35.141349289Z" level=info msg="StopPodSandbox for \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\"" Sep 10 23:47:35.141614 containerd[1532]: time="2025-09-10T23:47:35.141484211Z" level=info msg="Container to stop \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:47:35.141614 containerd[1532]: time="2025-09-10T23:47:35.141500691Z" level=info msg="Container to stop \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:47:35.141614 containerd[1532]: time="2025-09-10T23:47:35.141509731Z" level=info msg="Container to stop \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:47:35.141614 containerd[1532]: time="2025-09-10T23:47:35.141519131Z" level=info msg="Container to stop \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:47:35.141614 containerd[1532]: time="2025-09-10T23:47:35.141540971Z" level=info msg="Container to stop \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:47:35.147928 systemd[1]: cri-containerd-9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1.scope: Deactivated successfully. Sep 10 23:47:35.148775 systemd[1]: cri-containerd-18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887.scope: Deactivated successfully. Sep 10 23:47:35.149880 containerd[1532]: time="2025-09-10T23:47:35.149843017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" id:\"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" pid:2860 exit_status:137 exited_at:{seconds:1757548055 nanos:149575534}" Sep 10 23:47:35.176423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1-rootfs.mount: Deactivated successfully. Sep 10 23:47:35.180325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887-rootfs.mount: Deactivated successfully. 
Sep 10 23:47:35.183160 containerd[1532]: time="2025-09-10T23:47:35.182943438Z" level=info msg="shim disconnected" id=9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1 namespace=k8s.io Sep 10 23:47:35.183160 containerd[1532]: time="2025-09-10T23:47:35.182973198Z" level=warning msg="cleaning up after shim disconnected" id=9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1 namespace=k8s.io Sep 10 23:47:35.183160 containerd[1532]: time="2025-09-10T23:47:35.183010999Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:47:35.184329 containerd[1532]: time="2025-09-10T23:47:35.184285212Z" level=info msg="shim disconnected" id=18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887 namespace=k8s.io Sep 10 23:47:35.184423 containerd[1532]: time="2025-09-10T23:47:35.184320772Z" level=warning msg="cleaning up after shim disconnected" id=18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887 namespace=k8s.io Sep 10 23:47:35.184423 containerd[1532]: time="2025-09-10T23:47:35.184383173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:47:35.208726 containerd[1532]: time="2025-09-10T23:47:35.208621902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\" id:\"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\" pid:2876 exit_status:137 exited_at:{seconds:1757548055 nanos:153545895}" Sep 10 23:47:35.209568 containerd[1532]: time="2025-09-10T23:47:35.209332390Z" level=info msg="received exit event sandbox_id:\"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" exit_status:137 exited_at:{seconds:1757548055 nanos:149575534}" Sep 10 23:47:35.211309 containerd[1532]: time="2025-09-10T23:47:35.210129398Z" level=info msg="TearDown network for sandbox \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" successfully" Sep 10 23:47:35.211309 containerd[1532]: time="2025-09-10T23:47:35.210187999Z" level=info msg="StopPodSandbox for \"18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887\" returns successfully" Sep 10 23:47:35.211309 containerd[1532]: time="2025-09-10T23:47:35.209513152Z" level=info msg="TearDown network for sandbox \"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\" successfully" Sep 10 23:47:35.211309 containerd[1532]: time="2025-09-10T23:47:35.210310200Z" level=info msg="StopPodSandbox for \"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\" returns successfully" Sep 10 23:47:35.211309 containerd[1532]: time="2025-09-10T23:47:35.210964807Z" level=info msg="received exit event sandbox_id:\"9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1\" exit_status:137 exited_at:{seconds:1757548055 nanos:153545895}" Sep 10 23:47:35.210241 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9efe52db0ea7b38789f2d4bcc1f5c5d3102a686e85ca3c4b22f75be11bfce8c1-shm.mount: Deactivated successfully. Sep 10 23:47:35.210367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18c24259b90037b0f2e841da9314c681019dea5a55e5685df2af7037f1eca887-shm.mount: Deactivated successfully. 
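The TaskExit events for both pod sandboxes above report exit_status:137. By the usual 128+signal convention that corresponds to SIGKILL (9), which is consistent with the processes being force-terminated during the StopContainer ("with timeout 30 (s)" / "with timeout 2 (s)") and StopPodSandbox sequence earlier. A tiny stdlib-only sketch:

```python
import signal

def describe_exit_status(status: int) -> str:
    """Interpret a container exit status using the shell-style 128+N signal convention."""
    if status > 128:
        return f"terminated by {signal.Signals(status - 128).name}"
    return f"exited with code {status}"

print(describe_exit_status(137))  # -> terminated by SIGKILL
```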
Sep 10 23:47:35.321977 kubelet[2672]: I0910 23:47:35.321905 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-net\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.321977 kubelet[2672]: I0910 23:47:35.321961 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-lib-modules\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.321977 kubelet[2672]: I0910 23:47:35.321985 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-hubble-tls\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322467 kubelet[2672]: I0910 23:47:35.322003 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-hostproc\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322467 kubelet[2672]: I0910 23:47:35.322017 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-xtables-lock\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322467 kubelet[2672]: I0910 23:47:35.322030 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cni-path\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322467 kubelet[2672]: I0910 23:47:35.322044 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-cgroup\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322467 kubelet[2672]: I0910 23:47:35.322085 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cf795db-4294-4e84-98fc-db007d34dc3a-cilium-config-path\") pod \"1cf795db-4294-4e84-98fc-db007d34dc3a\" (UID: \"1cf795db-4294-4e84-98fc-db007d34dc3a\") " Sep 10 23:47:35.322467 kubelet[2672]: I0910 23:47:35.322106 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88cpp\" (UniqueName: \"kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-kube-api-access-88cpp\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322590 kubelet[2672]: I0910 23:47:35.322128 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqtrl\" (UniqueName: \"kubernetes.io/projected/1cf795db-4294-4e84-98fc-db007d34dc3a-kube-api-access-rqtrl\") pod \"1cf795db-4294-4e84-98fc-db007d34dc3a\" (UID: \"1cf795db-4294-4e84-98fc-db007d34dc3a\") " Sep 10 23:47:35.322590 kubelet[2672]: I0910 
23:47:35.322174 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ec97403-52b2-4394-b361-4cd9617f584d-clustermesh-secrets\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322590 kubelet[2672]: I0910 23:47:35.322191 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-config-path\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322590 kubelet[2672]: I0910 23:47:35.322206 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-run\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322590 kubelet[2672]: I0910 23:47:35.322223 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-bpf-maps\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322590 kubelet[2672]: I0910 23:47:35.322262 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-kernel\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.322721 kubelet[2672]: I0910 23:47:35.322278 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-etc-cni-netd\") pod \"7ec97403-52b2-4394-b361-4cd9617f584d\" (UID: \"7ec97403-52b2-4394-b361-4cd9617f584d\") " Sep 10 23:47:35.324456 kubelet[2672]: I0910 23:47:35.324176 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-hostproc" (OuterVolumeSpecName: "hostproc") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324456 kubelet[2672]: I0910 23:47:35.324210 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324456 kubelet[2672]: I0910 23:47:35.324175 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324456 kubelet[2672]: I0910 23:47:35.324174 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324456 kubelet[2672]: I0910 23:47:35.324259 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324659 kubelet[2672]: I0910 23:47:35.324414 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324659 kubelet[2672]: I0910 23:47:35.324449 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324659 kubelet[2672]: I0910 23:47:35.324503 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324659 kubelet[2672]: I0910 23:47:35.324523 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.324659 kubelet[2672]: I0910 23:47:35.324538 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cni-path" (OuterVolumeSpecName: "cni-path") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:47:35.334251 kubelet[2672]: I0910 23:47:35.334006 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf795db-4294-4e84-98fc-db007d34dc3a-kube-api-access-rqtrl" (OuterVolumeSpecName: "kube-api-access-rqtrl") pod "1cf795db-4294-4e84-98fc-db007d34dc3a" (UID: "1cf795db-4294-4e84-98fc-db007d34dc3a"). InnerVolumeSpecName "kube-api-access-rqtrl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:47:35.334608 kubelet[2672]: I0910 23:47:35.334581 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:47:35.334781 kubelet[2672]: I0910 23:47:35.334743 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec97403-52b2-4394-b361-4cd9617f584d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:47:35.336606 kubelet[2672]: I0910 23:47:35.336563 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf795db-4294-4e84-98fc-db007d34dc3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1cf795db-4294-4e84-98fc-db007d34dc3a" (UID: "1cf795db-4294-4e84-98fc-db007d34dc3a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:47:35.336908 kubelet[2672]: I0910 23:47:35.336861 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-kube-api-access-88cpp" (OuterVolumeSpecName: "kube-api-access-88cpp") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "kube-api-access-88cpp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:47:35.338606 kubelet[2672]: I0910 23:47:35.338560 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ec97403-52b2-4394-b361-4cd9617f584d" (UID: "7ec97403-52b2-4394-b361-4cd9617f584d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:47:35.422542 kubelet[2672]: I0910 23:47:35.422489 2672 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422542 kubelet[2672]: I0910 23:47:35.422529 2672 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422542 kubelet[2672]: I0910 23:47:35.422541 2672 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422542 kubelet[2672]: I0910 23:47:35.422550 2672 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422542 kubelet[2672]: I0910 23:47:35.422557 2672 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422565 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422574 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cf795db-4294-4e84-98fc-db007d34dc3a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422583 2672 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-88cpp\" (UniqueName: \"kubernetes.io/projected/7ec97403-52b2-4394-b361-4cd9617f584d-kube-api-access-88cpp\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422591 2672 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rqtrl\" (UniqueName: \"kubernetes.io/projected/1cf795db-4294-4e84-98fc-db007d34dc3a-kube-api-access-rqtrl\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422599 2672 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ec97403-52b2-4394-b361-4cd9617f584d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422606 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422614 2672 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422775 kubelet[2672]: I0910 23:47:35.422621 2672 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-bpf-maps\") on node \"localhost\" 
DevicePath \"\"" Sep 10 23:47:35.422932 kubelet[2672]: I0910 23:47:35.422629 2672 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422932 kubelet[2672]: I0910 23:47:35.422639 2672 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.422932 kubelet[2672]: I0910 23:47:35.422646 2672 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ec97403-52b2-4394-b361-4cd9617f584d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 23:47:35.667324 systemd[1]: Removed slice kubepods-besteffort-pod1cf795db_4294_4e84_98fc_db007d34dc3a.slice - libcontainer container kubepods-besteffort-pod1cf795db_4294_4e84_98fc_db007d34dc3a.slice. Sep 10 23:47:35.668637 systemd[1]: Removed slice kubepods-burstable-pod7ec97403_52b2_4394_b361_4cd9617f584d.slice - libcontainer container kubepods-burstable-pod7ec97403_52b2_4394_b361_4cd9617f584d.slice. Sep 10 23:47:35.668733 systemd[1]: kubepods-burstable-pod7ec97403_52b2_4394_b361_4cd9617f584d.slice: Consumed 6.687s CPU time, 123.3M memory peak, 132K read from disk, 12.9M written to disk. Sep 10 23:47:35.891916 kubelet[2672]: I0910 23:47:35.891885 2672 scope.go:117] "RemoveContainer" containerID="42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d" Sep 10 23:47:35.893779 containerd[1532]: time="2025-09-10T23:47:35.893605839Z" level=info msg="RemoveContainer for \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\"" Sep 10 23:47:35.919573 containerd[1532]: time="2025-09-10T23:47:35.919472625Z" level=info msg="RemoveContainer for \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" returns successfully" Sep 10 23:47:35.921847 kubelet[2672]: I0910 23:47:35.921821 2672 scope.go:117] "RemoveContainer" containerID="42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d" Sep 10 23:47:35.922207 containerd[1532]: time="2025-09-10T23:47:35.922161453Z" level=error msg="ContainerStatus for \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\": not found" Sep 10 23:47:35.932404 kubelet[2672]: E0910 23:47:35.932346 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\": not found" containerID="42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d" Sep 10 23:47:35.932507 kubelet[2672]: I0910 23:47:35.932426 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d"} err="failed to get container status \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"42449d0fc3485823273acfe499333b5b794dadcbeb8618c06ee08be988b5bc4d\": not found" Sep 10 23:47:35.932507 kubelet[2672]: I0910 23:47:35.932491 2672 scope.go:117] "RemoveContainer" 
containerID="042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580" Sep 10 23:47:35.934729 containerd[1532]: time="2025-09-10T23:47:35.934694062Z" level=info msg="RemoveContainer for \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\"" Sep 10 23:47:35.944408 containerd[1532]: time="2025-09-10T23:47:35.944360201Z" level=info msg="RemoveContainer for \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" returns successfully" Sep 10 23:47:35.944711 kubelet[2672]: I0910 23:47:35.944674 2672 scope.go:117] "RemoveContainer" containerID="460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c" Sep 10 23:47:35.946570 containerd[1532]: time="2025-09-10T23:47:35.946536104Z" level=info msg="RemoveContainer for \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\"" Sep 10 23:47:35.950463 containerd[1532]: time="2025-09-10T23:47:35.950413744Z" level=info msg="RemoveContainer for \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" returns successfully" Sep 10 23:47:35.950760 kubelet[2672]: I0910 23:47:35.950733 2672 scope.go:117] "RemoveContainer" containerID="8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237" Sep 10 23:47:35.953228 containerd[1532]: time="2025-09-10T23:47:35.953193172Z" level=info msg="RemoveContainer for \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\"" Sep 10 23:47:35.956560 containerd[1532]: time="2025-09-10T23:47:35.956526487Z" level=info msg="RemoveContainer for \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" returns successfully" Sep 10 23:47:35.956762 kubelet[2672]: I0910 23:47:35.956732 2672 scope.go:117] "RemoveContainer" containerID="c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b" Sep 10 23:47:35.958348 containerd[1532]: time="2025-09-10T23:47:35.958307345Z" level=info msg="RemoveContainer for \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\"" Sep 10 23:47:35.961573 containerd[1532]: time="2025-09-10T23:47:35.961520058Z" level=info msg="RemoveContainer for \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" returns successfully" Sep 10 23:47:35.961755 kubelet[2672]: I0910 23:47:35.961724 2672 scope.go:117] "RemoveContainer" containerID="153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0" Sep 10 23:47:35.963336 containerd[1532]: time="2025-09-10T23:47:35.963302917Z" level=info msg="RemoveContainer for \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\"" Sep 10 23:47:35.966215 containerd[1532]: time="2025-09-10T23:47:35.966181466Z" level=info msg="RemoveContainer for \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" returns successfully" Sep 10 23:47:35.966417 kubelet[2672]: I0910 23:47:35.966390 2672 scope.go:117] "RemoveContainer" containerID="042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580" Sep 10 23:47:35.966679 containerd[1532]: time="2025-09-10T23:47:35.966643591Z" level=error msg="ContainerStatus for \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\": not found" Sep 10 23:47:35.966790 kubelet[2672]: E0910 23:47:35.966769 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\": not 
found" containerID="042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580" Sep 10 23:47:35.966836 kubelet[2672]: I0910 23:47:35.966813 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580"} err="failed to get container status \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\": rpc error: code = NotFound desc = an error occurred when try to find container \"042942955ae5c6075a233ccbc63cc4d879e59bb9f8253c28fd06d207cf02d580\": not found" Sep 10 23:47:35.966865 kubelet[2672]: I0910 23:47:35.966837 2672 scope.go:117] "RemoveContainer" containerID="460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c" Sep 10 23:47:35.967097 containerd[1532]: time="2025-09-10T23:47:35.967009115Z" level=error msg="ContainerStatus for \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\": not found" Sep 10 23:47:35.967208 kubelet[2672]: E0910 23:47:35.967181 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\": not found" containerID="460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c" Sep 10 23:47:35.967253 kubelet[2672]: I0910 23:47:35.967213 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c"} err="failed to get container status \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"460aba16a59dd9cd5bea1bb5b91d9a32ca6a245324694559b905a454fcbf3c2c\": not found" Sep 10 23:47:35.967253 kubelet[2672]: I0910 23:47:35.967231 2672 scope.go:117] "RemoveContainer" containerID="8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237" Sep 10 23:47:35.967461 containerd[1532]: time="2025-09-10T23:47:35.967428759Z" level=error msg="ContainerStatus for \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\": not found" Sep 10 23:47:35.967555 kubelet[2672]: E0910 23:47:35.967534 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\": not found" containerID="8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237" Sep 10 23:47:35.967597 kubelet[2672]: I0910 23:47:35.967560 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237"} err="failed to get container status \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fa15c55be50683ef783aa18f109a123cfbfe8ec0edcd42646aa067152a64237\": not found" Sep 10 23:47:35.967632 kubelet[2672]: I0910 23:47:35.967597 2672 scope.go:117] "RemoveContainer" 
containerID="c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b" Sep 10 23:47:35.967798 containerd[1532]: time="2025-09-10T23:47:35.967764362Z" level=error msg="ContainerStatus for \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\": not found" Sep 10 23:47:35.967951 kubelet[2672]: E0910 23:47:35.967929 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\": not found" containerID="c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b" Sep 10 23:47:35.967995 kubelet[2672]: I0910 23:47:35.967948 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b"} err="failed to get container status \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c60b842b71f161a50f33e39fb8cd43f35ee9423e69fed3be499e44c991bdec4b\": not found" Sep 10 23:47:35.967995 kubelet[2672]: I0910 23:47:35.967962 2672 scope.go:117] "RemoveContainer" containerID="153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0" Sep 10 23:47:35.968132 containerd[1532]: time="2025-09-10T23:47:35.968098606Z" level=error msg="ContainerStatus for \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\": not found" Sep 10 23:47:35.968290 kubelet[2672]: E0910 23:47:35.968267 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\": not found" containerID="153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0" Sep 10 23:47:35.968333 kubelet[2672]: I0910 23:47:35.968298 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0"} err="failed to get container status \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"153022a457a6e404ef1a05b9f10a8832b08e4a4bfefa1781bd7f1153027f88c0\": not found" Sep 10 23:47:35.994293 systemd[1]: var-lib-kubelet-pods-1cf795db\x2d4294\x2d4e84\x2d98fc\x2ddb007d34dc3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drqtrl.mount: Deactivated successfully. Sep 10 23:47:35.994398 systemd[1]: var-lib-kubelet-pods-7ec97403\x2d52b2\x2d4394\x2db361\x2d4cd9617f584d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d88cpp.mount: Deactivated successfully. Sep 10 23:47:35.994452 systemd[1]: var-lib-kubelet-pods-7ec97403\x2d52b2\x2d4394\x2db361\x2d4cd9617f584d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 23:47:35.994507 systemd[1]: var-lib-kubelet-pods-7ec97403\x2d52b2\x2d4394\x2db361\x2d4cd9617f584d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 10 23:47:36.910966 sshd[4262]: Connection closed by 10.0.0.1 port 49748 Sep 10 23:47:36.911370 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:36.926745 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:49748.service: Deactivated successfully. Sep 10 23:47:36.928800 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 23:47:36.929009 systemd[1]: session-22.scope: Consumed 1.897s CPU time, 26.6M memory peak. Sep 10 23:47:36.929751 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit. Sep 10 23:47:36.933500 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:49752.service - OpenSSH per-connection server daemon (10.0.0.1:49752). Sep 10 23:47:36.934303 systemd-logind[1516]: Removed session 22. Sep 10 23:47:36.980575 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 49752 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:36.981976 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:36.986935 systemd-logind[1516]: New session 23 of user core. Sep 10 23:47:36.997393 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 23:47:37.663248 kubelet[2672]: I0910 23:47:37.662384 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf795db-4294-4e84-98fc-db007d34dc3a" path="/var/lib/kubelet/pods/1cf795db-4294-4e84-98fc-db007d34dc3a/volumes" Sep 10 23:47:37.663248 kubelet[2672]: I0910 23:47:37.662751 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec97403-52b2-4394-b361-4cd9617f584d" path="/var/lib/kubelet/pods/7ec97403-52b2-4394-b361-4cd9617f584d/volumes" Sep 10 23:47:38.206541 sshd[4417]: Connection closed by 10.0.0.1 port 49752 Sep 10 23:47:38.207042 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:38.218315 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:49752.service: Deactivated successfully. Sep 10 23:47:38.221848 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 23:47:38.223246 systemd[1]: session-23.scope: Consumed 1.106s CPU time, 24.2M memory peak. Sep 10 23:47:38.227299 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit. Sep 10 23:47:38.234926 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:49764.service - OpenSSH per-connection server daemon (10.0.0.1:49764). Sep 10 23:47:38.236789 systemd-logind[1516]: Removed session 23. Sep 10 23:47:38.257386 systemd[1]: Created slice kubepods-burstable-pod946f1d56_b5c9_41c3_993e_dbdb8b0337fd.slice - libcontainer container kubepods-burstable-pod946f1d56_b5c9_41c3_993e_dbdb8b0337fd.slice. Sep 10 23:47:38.292081 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 49764 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:38.293417 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:38.297478 systemd-logind[1516]: New session 24 of user core. Sep 10 23:47:38.310366 systemd[1]: Started session-24.scope - Session 24 of User core. 
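Editor's note: the "Created slice kubepods-burstable-pod946f1d56_..." line above, together with the "Removed slice" lines earlier in this section, shows how the kubelet's systemd cgroup driver derives a slice name from the pod's QoS class and UID: dashes in the UID become underscores. A small sketch of that observable mapping, matching the names seen in this journal (the helper function is hypothetical, not kubelet's own code):

```go
// Illustrative only: reproduce the slice names visible in this journal by
// escaping a pod UID the way the systemd cgroup driver does (dashes -> underscores).
package main

import (
	"fmt"
	"strings"
)

// podSliceName is a hypothetical helper; the kubelet's real logic lives in its
// cgroup manager, but the naming rule observable in the log is the same.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Matches "kubepods-burstable-pod946f1d56_b5c9_41c3_993e_dbdb8b0337fd.slice" above.
	fmt.Println(podSliceName("burstable", "946f1d56-b5c9-41c3-993e-dbdb8b0337fd"))
	// Matches the slice removed earlier for the terminated besteffort pod.
	fmt.Println(podSliceName("besteffort", "1cf795db-4294-4e84-98fc-db007d34dc3a"))
}
```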
Sep 10 23:47:38.336443 kubelet[2672]: I0910 23:47:38.336403 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-clustermesh-secrets\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.336940 kubelet[2672]: I0910 23:47:38.336592 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-xtables-lock\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.336940 kubelet[2672]: I0910 23:47:38.336646 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-cilium-config-path\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.336940 kubelet[2672]: I0910 23:47:38.336668 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-host-proc-sys-net\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.336940 kubelet[2672]: I0910 23:47:38.336686 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-lib-modules\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.336940 kubelet[2672]: I0910 23:47:38.336703 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-cilium-ipsec-secrets\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.336940 kubelet[2672]: I0910 23:47:38.336722 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-cilium-cgroup\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337176 kubelet[2672]: I0910 23:47:38.336750 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-host-proc-sys-kernel\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337176 kubelet[2672]: I0910 23:47:38.336767 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-hostproc\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337176 kubelet[2672]: I0910 23:47:38.336784 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-bpf-maps\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337176 kubelet[2672]: I0910 23:47:38.336800 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-hubble-tls\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337176 kubelet[2672]: I0910 23:47:38.336815 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbv7\" (UniqueName: \"kubernetes.io/projected/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-kube-api-access-vpbv7\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337176 kubelet[2672]: I0910 23:47:38.336837 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-cilium-run\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337295 kubelet[2672]: I0910 23:47:38.336852 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-etc-cni-netd\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.337295 kubelet[2672]: I0910 23:47:38.336866 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/946f1d56-b5c9-41c3-993e-dbdb8b0337fd-cni-path\") pod \"cilium-r2758\" (UID: \"946f1d56-b5c9-41c3-993e-dbdb8b0337fd\") " pod="kube-system/cilium-r2758" Sep 10 23:47:38.359553 sshd[4431]: Connection closed by 10.0.0.1 port 49764 Sep 10 23:47:38.359894 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:38.375980 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:49764.service: Deactivated successfully. Sep 10 23:47:38.378110 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 23:47:38.379221 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit. Sep 10 23:47:38.386005 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:49770.service - OpenSSH per-connection server daemon (10.0.0.1:49770). Sep 10 23:47:38.386799 systemd-logind[1516]: Removed session 24. Sep 10 23:47:38.438609 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 49770 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:47:38.441696 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:47:38.458772 systemd-logind[1516]: New session 25 of user core. Sep 10 23:47:38.468402 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 10 23:47:38.561885 kubelet[2672]: E0910 23:47:38.561836 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:38.563169 containerd[1532]: time="2025-09-10T23:47:38.562889848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r2758,Uid:946f1d56-b5c9-41c3-993e-dbdb8b0337fd,Namespace:kube-system,Attempt:0,}" Sep 10 23:47:38.588656 containerd[1532]: time="2025-09-10T23:47:38.588607492Z" level=info msg="connecting to shim 465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3" address="unix:///run/containerd/s/2494406aaad6229f8423e12bc4edadbb6510ad60c0970642e15da615ba525fab" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:47:38.618424 systemd[1]: Started cri-containerd-465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3.scope - libcontainer container 465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3. Sep 10 23:47:38.642037 containerd[1532]: time="2025-09-10T23:47:38.641987479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r2758,Uid:946f1d56-b5c9-41c3-993e-dbdb8b0337fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\"" Sep 10 23:47:38.643021 kubelet[2672]: E0910 23:47:38.642980 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:38.650694 containerd[1532]: time="2025-09-10T23:47:38.650590560Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 23:47:38.657953 containerd[1532]: time="2025-09-10T23:47:38.657911430Z" level=info msg="Container b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:47:38.663424 containerd[1532]: time="2025-09-10T23:47:38.663370402Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835\"" Sep 10 23:47:38.664488 containerd[1532]: time="2025-09-10T23:47:38.664449492Z" level=info msg="StartContainer for \"b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835\"" Sep 10 23:47:38.665797 containerd[1532]: time="2025-09-10T23:47:38.665529742Z" level=info msg="connecting to shim b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835" address="unix:///run/containerd/s/2494406aaad6229f8423e12bc4edadbb6510ad60c0970642e15da615ba525fab" protocol=ttrpc version=3 Sep 10 23:47:38.689373 systemd[1]: Started cri-containerd-b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835.scope - libcontainer container b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835. 
Sep 10 23:47:38.712824 kubelet[2672]: E0910 23:47:38.712718 2672 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 23:47:38.720706 containerd[1532]: time="2025-09-10T23:47:38.720664065Z" level=info msg="StartContainer for \"b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835\" returns successfully" Sep 10 23:47:38.729698 systemd[1]: cri-containerd-b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835.scope: Deactivated successfully. Sep 10 23:47:38.732213 containerd[1532]: time="2025-09-10T23:47:38.732151574Z" level=info msg="received exit event container_id:\"b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835\" id:\"b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835\" pid:4510 exited_at:{seconds:1757548058 nanos:731826771}" Sep 10 23:47:38.732429 containerd[1532]: time="2025-09-10T23:47:38.732357576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835\" id:\"b6da0a40ef2cf787d4325196fc7ed63c47f6f43c21f5e7907d63dce5ac80a835\" pid:4510 exited_at:{seconds:1757548058 nanos:731826771}" Sep 10 23:47:38.907527 kubelet[2672]: E0910 23:47:38.907482 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:38.912209 containerd[1532]: time="2025-09-10T23:47:38.912164682Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 23:47:38.934010 containerd[1532]: time="2025-09-10T23:47:38.933962609Z" level=info msg="Container 0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:47:38.940544 containerd[1532]: time="2025-09-10T23:47:38.940495791Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945\"" Sep 10 23:47:38.941283 containerd[1532]: time="2025-09-10T23:47:38.941253598Z" level=info msg="StartContainer for \"0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945\"" Sep 10 23:47:38.942255 containerd[1532]: time="2025-09-10T23:47:38.942228248Z" level=info msg="connecting to shim 0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945" address="unix:///run/containerd/s/2494406aaad6229f8423e12bc4edadbb6510ad60c0970642e15da615ba525fab" protocol=ttrpc version=3 Sep 10 23:47:38.965327 systemd[1]: Started cri-containerd-0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945.scope - libcontainer container 0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945. Sep 10 23:47:38.991881 containerd[1532]: time="2025-09-10T23:47:38.991843278Z" level=info msg="StartContainer for \"0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945\" returns successfully" Sep 10 23:47:38.998337 systemd[1]: cri-containerd-0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945.scope: Deactivated successfully. 
Sep 10 23:47:38.999890 containerd[1532]: time="2025-09-10T23:47:38.999823794Z" level=info msg="received exit event container_id:\"0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945\" id:\"0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945\" pid:4557 exited_at:{seconds:1757548058 nanos:999426790}" Sep 10 23:47:39.000315 containerd[1532]: time="2025-09-10T23:47:39.000287238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945\" id:\"0204e1e9f7875f425797e4087c1e42689903b325f2589eef3f133afd53943945\" pid:4557 exited_at:{seconds:1757548058 nanos:999426790}" Sep 10 23:47:39.913191 kubelet[2672]: E0910 23:47:39.912981 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:39.920214 containerd[1532]: time="2025-09-10T23:47:39.920172053Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 23:47:39.929379 containerd[1532]: time="2025-09-10T23:47:39.929222056Z" level=info msg="Container 4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:47:39.940153 containerd[1532]: time="2025-09-10T23:47:39.940091277Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf\"" Sep 10 23:47:39.940627 containerd[1532]: time="2025-09-10T23:47:39.940598841Z" level=info msg="StartContainer for \"4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf\"" Sep 10 23:47:39.942242 containerd[1532]: time="2025-09-10T23:47:39.942210736Z" level=info msg="connecting to shim 4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf" address="unix:///run/containerd/s/2494406aaad6229f8423e12bc4edadbb6510ad60c0970642e15da615ba525fab" protocol=ttrpc version=3 Sep 10 23:47:39.968367 systemd[1]: Started cri-containerd-4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf.scope - libcontainer container 4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf. Sep 10 23:47:40.004321 systemd[1]: cri-containerd-4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf.scope: Deactivated successfully. 
Sep 10 23:47:40.008284 containerd[1532]: time="2025-09-10T23:47:40.008244464Z" level=info msg="received exit event container_id:\"4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf\" id:\"4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf\" pid:4602 exited_at:{seconds:1757548060 nanos:7992822}" Sep 10 23:47:40.008477 containerd[1532]: time="2025-09-10T23:47:40.008419825Z" level=info msg="StartContainer for \"4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf\" returns successfully" Sep 10 23:47:40.008665 containerd[1532]: time="2025-09-10T23:47:40.008610467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf\" id:\"4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf\" pid:4602 exited_at:{seconds:1757548060 nanos:7992822}" Sep 10 23:47:40.032063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b903737ae83e95f635185591cc94445f941b8b47f573a9d592622c085099ddf-rootfs.mount: Deactivated successfully. Sep 10 23:47:40.918380 kubelet[2672]: E0910 23:47:40.918342 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:40.926408 containerd[1532]: time="2025-09-10T23:47:40.926364316Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 23:47:40.940219 containerd[1532]: time="2025-09-10T23:47:40.939322712Z" level=info msg="Container 0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:47:40.946182 containerd[1532]: time="2025-09-10T23:47:40.946118533Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2\"" Sep 10 23:47:40.946692 containerd[1532]: time="2025-09-10T23:47:40.946665018Z" level=info msg="StartContainer for \"0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2\"" Sep 10 23:47:40.948233 containerd[1532]: time="2025-09-10T23:47:40.948033910Z" level=info msg="connecting to shim 0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2" address="unix:///run/containerd/s/2494406aaad6229f8423e12bc4edadbb6510ad60c0970642e15da615ba525fab" protocol=ttrpc version=3 Sep 10 23:47:40.978370 systemd[1]: Started cri-containerd-0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2.scope - libcontainer container 0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2. Sep 10 23:47:41.021871 systemd[1]: cri-containerd-0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2.scope: Deactivated successfully. 
Sep 10 23:47:41.025424 containerd[1532]: time="2025-09-10T23:47:41.023716305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2\" id:\"0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2\" pid:4642 exited_at:{seconds:1757548061 nanos:22533935}" Sep 10 23:47:41.028161 containerd[1532]: time="2025-09-10T23:47:41.027852341Z" level=info msg="received exit event container_id:\"0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2\" id:\"0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2\" pid:4642 exited_at:{seconds:1757548061 nanos:22533935}" Sep 10 23:47:41.034756 containerd[1532]: time="2025-09-10T23:47:41.034717361Z" level=info msg="StartContainer for \"0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2\" returns successfully" Sep 10 23:47:41.046622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0babae009a181c9543037eec47ac9e8238694bf0c8a1407cde0e0d6d2cf5a0d2-rootfs.mount: Deactivated successfully. Sep 10 23:47:41.925538 kubelet[2672]: E0910 23:47:41.925321 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:41.936300 containerd[1532]: time="2025-09-10T23:47:41.936249609Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 23:47:41.951497 containerd[1532]: time="2025-09-10T23:47:41.951121899Z" level=info msg="Container 6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:47:41.968777 containerd[1532]: time="2025-09-10T23:47:41.968719573Z" level=info msg="CreateContainer within sandbox \"465f092b2a40a8e6dbf43cf97fe819806986608ec01c64e58ca25fa47f14a0d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\"" Sep 10 23:47:41.969557 containerd[1532]: time="2025-09-10T23:47:41.969514540Z" level=info msg="StartContainer for \"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\"" Sep 10 23:47:41.970733 containerd[1532]: time="2025-09-10T23:47:41.970585109Z" level=info msg="connecting to shim 6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f" address="unix:///run/containerd/s/2494406aaad6229f8423e12bc4edadbb6510ad60c0970642e15da615ba525fab" protocol=ttrpc version=3 Sep 10 23:47:41.995362 systemd[1]: Started cri-containerd-6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f.scope - libcontainer container 6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f. 
Sep 10 23:47:42.027398 containerd[1532]: time="2025-09-10T23:47:42.027358920Z" level=info msg="StartContainer for \"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\" returns successfully" Sep 10 23:47:42.093939 containerd[1532]: time="2025-09-10T23:47:42.093824526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\" id:\"e04d08d2623c0517f1f5f00a49d8d2e9b2230be4a5f4335c0978c2e5d32d2379\" pid:4712 exited_at:{seconds:1757548062 nanos:93373562}" Sep 10 23:47:42.324203 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 10 23:47:42.931651 kubelet[2672]: E0910 23:47:42.931606 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:42.950698 kubelet[2672]: I0910 23:47:42.950162 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r2758" podStartSLOduration=4.95012626 podStartE2EDuration="4.95012626s" podCreationTimestamp="2025-09-10 23:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:47:42.949672256 +0000 UTC m=+79.392696290" watchObservedRunningTime="2025-09-10 23:47:42.95012626 +0000 UTC m=+79.393150294" Sep 10 23:47:44.564350 kubelet[2672]: E0910 23:47:44.564235 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:44.660324 kubelet[2672]: E0910 23:47:44.660260 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:44.912722 containerd[1532]: time="2025-09-10T23:47:44.912675751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\" id:\"0ec01307708b8ae2d7b56d2e0fc39b2265037a4a4fb07d4ac8eaa89f3e1a0af9\" pid:5113 exit_status:1 exited_at:{seconds:1757548064 nanos:912382228}" Sep 10 23:47:45.314290 systemd-networkd[1438]: lxc_health: Link UP Sep 10 23:47:45.315767 systemd-networkd[1438]: lxc_health: Gained carrier Sep 10 23:47:46.564199 kubelet[2672]: E0910 23:47:46.564153 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:46.939608 kubelet[2672]: E0910 23:47:46.939532 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:47.016327 systemd-networkd[1438]: lxc_health: Gained IPv6LL Sep 10 23:47:47.099794 containerd[1532]: time="2025-09-10T23:47:47.099741131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\" id:\"15e739bb7a4e0ba9f394c37f0dcc7eacb3f84b66a187d451644d23133fe07e26\" pid:5251 exited_at:{seconds:1757548067 nanos:99344648}" Sep 10 23:47:47.941557 kubelet[2672]: E0910 23:47:47.941520 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:47:49.216070 
containerd[1532]: time="2025-09-10T23:47:49.216029700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\" id:\"f1b9bedb9bf1f223fd99a400e2740f29c1d8c7b8d3f11c867b04081dea8a2258\" pid:5283 exited_at:{seconds:1757548069 nanos:215523096}" Sep 10 23:47:51.344737 containerd[1532]: time="2025-09-10T23:47:51.344686412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b2002fd9db01d5872c1152c99b608496b33ff4884e70d7e2389ff3d2b259d0f\" id:\"580beef229360740f1bece6801dd559a16fe8a80495abaf01179677659dbc974\" pid:5308 exited_at:{seconds:1757548071 nanos:344301689}" Sep 10 23:47:51.350043 sshd[4444]: Connection closed by 10.0.0.1 port 49770 Sep 10 23:47:51.350507 sshd-session[4438]: pam_unix(sshd:session): session closed for user core Sep 10 23:47:51.354555 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:49770.service: Deactivated successfully. Sep 10 23:47:51.356551 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 23:47:51.357484 systemd-logind[1516]: Session 25 logged out. Waiting for processes to exit. Sep 10 23:47:51.358915 systemd-logind[1516]: Removed session 25. Sep 10 23:47:52.660006 kubelet[2672]: E0910 23:47:52.659964 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"