Sep 9 23:58:28.771482 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 23:58:28.771502 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:58:28.771512 kernel: KASLR enabled
Sep 9 23:58:28.771517 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:58:28.771523 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 23:58:28.771528 kernel: random: crng init done
Sep 9 23:58:28.771535 kernel: secureboot: Secure boot disabled
Sep 9 23:58:28.771540 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:58:28.771546 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 23:58:28.771554 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 23:58:28.771560 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771566 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771571 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771585 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771592 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771600 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771607 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771613 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771619 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:58:28.771625 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 23:58:28.771631 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:58:28.771637 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:58:28.771678 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 23:58:28.771685 kernel: Zone ranges:
Sep 9 23:58:28.771692 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:58:28.771700 kernel: DMA32 empty
Sep 9 23:58:28.771706 kernel: Normal empty
Sep 9 23:58:28.771712 kernel: Device empty
Sep 9 23:58:28.771718 kernel: Movable zone start for each node
Sep 9 23:58:28.771724 kernel: Early memory node ranges
Sep 9 23:58:28.771730 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 23:58:28.771736 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 23:58:28.771742 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 23:58:28.771749 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 23:58:28.771755 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 23:58:28.771761 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 23:58:28.771767 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 23:58:28.771775 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 23:58:28.771781 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 23:58:28.771791 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 23:58:28.771800 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 23:58:28.771806 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 23:58:28.771813 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 23:58:28.771821 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:58:28.771828 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 23:58:28.771834 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 23:58:28.771841 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:58:28.771847 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:58:28.771854 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:58:28.771860 kernel: psci: Trusted OS migration not required
Sep 9 23:58:28.771867 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:58:28.771873 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 23:58:28.771880 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:58:28.771902 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:58:28.771908 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 23:58:28.771915 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:58:28.771921 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:58:28.771928 kernel: CPU features: detected: Spectre-v4
Sep 9 23:58:28.771934 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:58:28.771941 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:58:28.771947 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:58:28.771954 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 23:58:28.771960 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:58:28.771967 kernel: alternatives: applying boot alternatives
Sep 9 23:58:28.771974 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:58:28.771982 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:58:28.771989 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:58:28.771996 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:58:28.772002 kernel: Fallback order for Node 0: 0
Sep 9 23:58:28.772008 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 23:58:28.772015 kernel: Policy zone: DMA
Sep 9 23:58:28.772021 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:58:28.772028 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 23:58:28.772034 kernel: software IO TLB: area num 4.
Sep 9 23:58:28.772041 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 23:58:28.772048 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 23:58:28.772055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 23:58:28.772062 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:58:28.772069 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:58:28.772076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 23:58:28.772082 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:58:28.772089 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:58:28.772095 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:58:28.772102 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 23:58:28.772108 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:58:28.772114 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:58:28.772121 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:58:28.772128 kernel: GICv3: 256 SPIs implemented
Sep 9 23:58:28.772134 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:58:28.772141 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:58:28.772147 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 23:58:28.772153 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:58:28.772160 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 23:58:28.772166 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 23:58:28.772172 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:58:28.772179 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:58:28.772185 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 23:58:28.772192 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 23:58:28.772198 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:58:28.772206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:58:28.772212 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 23:58:28.772219 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 23:58:28.772225 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 23:58:28.772232 kernel: arm-pv: using stolen time PV
Sep 9 23:58:28.772239 kernel: Console: colour dummy device 80x25
Sep 9 23:58:28.772245 kernel: ACPI: Core revision 20240827
Sep 9 23:58:28.772252 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 23:58:28.772258 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:58:28.772265 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:58:28.772273 kernel: landlock: Up and running.
Sep 9 23:58:28.772279 kernel: SELinux: Initializing.
Sep 9 23:58:28.772286 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:58:28.772292 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:58:28.772299 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:58:28.772306 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:58:28.772312 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:58:28.772319 kernel: Remapping and enabling EFI services.
Sep 9 23:58:28.772325 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:58:28.772337 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:58:28.772344 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 23:58:28.772351 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 23:58:28.772359 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:58:28.772366 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 23:58:28.772373 kernel: Detected PIPT I-cache on CPU2
Sep 9 23:58:28.772380 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 23:58:28.772387 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 23:58:28.772395 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:58:28.772402 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 23:58:28.772409 kernel: Detected PIPT I-cache on CPU3
Sep 9 23:58:28.772415 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 23:58:28.772422 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 23:58:28.772429 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:58:28.772436 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 23:58:28.772443 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 23:58:28.772449 kernel: SMP: Total of 4 processors activated.
Sep 9 23:58:28.772457 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:58:28.772464 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:58:28.772471 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:58:28.772478 kernel: CPU features: detected: Common not Private translations
Sep 9 23:58:28.772485 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:58:28.772491 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 23:58:28.772498 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:58:28.772505 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:58:28.772512 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:58:28.772520 kernel: CPU features: detected: RAS Extension Support
Sep 9 23:58:28.772527 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:58:28.772534 kernel: alternatives: applying system-wide alternatives
Sep 9 23:58:28.772541 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 23:58:28.772549 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 9 23:58:28.772555 kernel: devtmpfs: initialized
Sep 9 23:58:28.772563 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:58:28.772570 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 23:58:28.772580 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:58:28.772589 kernel: 0 pages in range for non-PLT usage
Sep 9 23:58:28.772596 kernel: 508576 pages in range for PLT usage
Sep 9 23:58:28.772603 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:58:28.772609 kernel: SMBIOS 3.0.0 present.
Sep 9 23:58:28.772616 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 23:58:28.772623 kernel: DMI: Memory slots populated: 1/1
Sep 9 23:58:28.772630 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:58:28.772637 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:58:28.772676 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:58:28.772686 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:58:28.772693 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:58:28.772700 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 9 23:58:28.772707 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:58:28.772713 kernel: cpuidle: using governor menu
Sep 9 23:58:28.772720 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:58:28.772727 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:58:28.772734 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:58:28.772741 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:58:28.772749 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:58:28.772756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:58:28.772763 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:58:28.772770 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:58:28.772777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:58:28.772784 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:58:28.772794 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:58:28.772801 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:58:28.772808 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:58:28.772816 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:58:28.772823 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:58:28.772830 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:58:28.772837 kernel: ACPI: Interpreter enabled
Sep 9 23:58:28.772844 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:58:28.772851 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:58:28.772857 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:58:28.772864 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:58:28.772871 kernel: ACPI: CPU2 has been hot-added
Sep 9 23:58:28.772878 kernel: ACPI: CPU3 has been hot-added
Sep 9 23:58:28.772886 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:58:28.772893 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 23:58:28.772905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 23:58:28.773035 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:58:28.773102 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:58:28.773159 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:58:28.773224 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 23:58:28.773283 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 23:58:28.773292 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 23:58:28.773299 kernel: PCI host bridge to bus 0000:00
Sep 9 23:58:28.773363 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 23:58:28.773416 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:58:28.773468 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 23:58:28.773519 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 23:58:28.773619 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:58:28.773708 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 23:58:28.773780 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 23:58:28.773852 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 23:58:28.773920 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:58:28.774004 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 23:58:28.774063 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 23:58:28.774155 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 23:58:28.774210 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 23:58:28.774261 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:58:28.774312 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 23:58:28.774322 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:58:28.774329 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:58:28.774336 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:58:28.774345 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:58:28.774352 kernel: iommu: Default domain type: Translated
Sep 9 23:58:28.774359 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:58:28.774366 kernel: efivars: Registered efivars operations
Sep 9 23:58:28.774373 kernel: vgaarb: loaded
Sep 9 23:58:28.774380 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:58:28.774387 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:58:28.774394 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:58:28.774401 kernel: pnp: PnP ACPI init
Sep 9 23:58:28.774474 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 23:58:28.774484 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:58:28.774491 kernel: NET: Registered PF_INET protocol family
Sep 9 23:58:28.774498 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:58:28.774505 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:58:28.774512 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:58:28.774519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:58:28.774526 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:58:28.774534 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:58:28.774541 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:58:28.774548 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:58:28.774555 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:58:28.774562 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:58:28.774569 kernel: kvm [1]: HYP mode not available
Sep 9 23:58:28.774576 kernel: Initialise system trusted keyrings
Sep 9 23:58:28.774592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:58:28.774599 kernel: Key type asymmetric registered
Sep 9 23:58:28.774607 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:58:28.774614 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:58:28.774624 kernel: io scheduler mq-deadline registered
Sep 9 23:58:28.774631 kernel: io scheduler kyber registered
Sep 9 23:58:28.774638 kernel: io scheduler bfq registered
Sep 9 23:58:28.774654 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:58:28.774661 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:58:28.774668 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:58:28.774735 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 23:58:28.774748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:58:28.774759 kernel: thunder_xcv, ver 1.0
Sep 9 23:58:28.774766 kernel: thunder_bgx, ver 1.0
Sep 9 23:58:28.774773 kernel: nicpf, ver 1.0
Sep 9 23:58:28.774781 kernel: nicvf, ver 1.0
Sep 9 23:58:28.774853 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:58:28.774911 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:58:28 UTC (1757462308)
Sep 9 23:58:28.774920 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:58:28.774927 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 23:58:28.774936 kernel: watchdog: NMI not fully supported
Sep 9 23:58:28.774943 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:58:28.774950 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:58:28.774957 kernel: Segment Routing with IPv6
Sep 9 23:58:28.774963 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:58:28.774970 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:58:28.774977 kernel: Key type dns_resolver registered
Sep 9 23:58:28.774984 kernel: registered taskstats version 1
Sep 9 23:58:28.774990 kernel: Loading compiled-in X.509 certificates
Sep 9 23:58:28.774999 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:58:28.775006 kernel: Demotion targets for Node 0: null
Sep 9 23:58:28.775013 kernel: Key type .fscrypt registered
Sep 9 23:58:28.775020 kernel: Key type fscrypt-provisioning registered
Sep 9 23:58:28.775027 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:58:28.775034 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:58:28.775041 kernel: ima: No architecture policies found
Sep 9 23:58:28.775047 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:58:28.775055 kernel: clk: Disabling unused clocks
Sep 9 23:58:28.775063 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:58:28.775070 kernel: Warning: unable to open an initial console.
Sep 9 23:58:28.775077 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:58:28.775084 kernel: Run /init as init process
Sep 9 23:58:28.775091 kernel: with arguments:
Sep 9 23:58:28.775098 kernel: /init
Sep 9 23:58:28.775105 kernel: with environment:
Sep 9 23:58:28.775111 kernel: HOME=/
Sep 9 23:58:28.775118 kernel: TERM=linux
Sep 9 23:58:28.775126 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:58:28.775134 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:58:28.775144 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:58:28.775152 systemd[1]: Detected virtualization kvm.
Sep 9 23:58:28.775159 systemd[1]: Detected architecture arm64.
Sep 9 23:58:28.775166 systemd[1]: Running in initrd.
Sep 9 23:58:28.775174 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:58:28.775183 systemd[1]: Hostname set to .
Sep 9 23:58:28.775190 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:58:28.775197 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:58:28.775205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:58:28.775212 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:58:28.775220 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:58:28.775228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:58:28.775235 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:58:28.775245 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:58:28.775253 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:58:28.775261 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:58:28.775269 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:58:28.775276 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:58:28.775284 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:58:28.775291 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:58:28.775300 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:58:28.775307 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:58:28.775315 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:58:28.775322 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:58:28.775330 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:58:28.775337 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:58:28.775345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:58:28.775353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:58:28.775361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:58:28.775369 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:58:28.775377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:58:28.775387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:58:28.775395 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:58:28.775403 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:58:28.775411 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:58:28.775418 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:58:28.775430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:58:28.775445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:58:28.775453 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:58:28.775461 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:58:28.775468 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:58:28.775477 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:58:28.775510 systemd-journald[244]: Collecting audit messages is disabled.
Sep 9 23:58:28.775528 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:58:28.775537 systemd-journald[244]: Journal started
Sep 9 23:58:28.775557 systemd-journald[244]: Runtime Journal (/run/log/journal/29a68d45d200488899f6289d6ea888df) is 6M, max 48.5M, 42.4M free.
Sep 9 23:58:28.781043 kernel: Bridge firewalling registered
Sep 9 23:58:28.763265 systemd-modules-load[245]: Inserted module 'overlay'
Sep 9 23:58:28.777300 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 9 23:58:28.785019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:58:28.785038 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:58:28.786129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:58:28.788298 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:58:28.791105 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:58:28.792942 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:58:28.794631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:58:28.807704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:58:28.815384 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:58:28.816402 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:58:28.816550 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:58:28.819120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:58:28.822255 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:58:28.824847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:58:28.826518 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:58:28.846451 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:58:28.860322 systemd-resolved[284]: Positive Trust Anchors:
Sep 9 23:58:28.860344 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:58:28.860376 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:58:28.865334 systemd-resolved[284]: Defaulting to hostname 'linux'.
Sep 9 23:58:28.867594 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:58:28.868466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:58:28.918680 kernel: SCSI subsystem initialized
Sep 9 23:58:28.923660 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:58:28.930660 kernel: iscsi: registered transport (tcp)
Sep 9 23:58:28.943049 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:58:28.943091 kernel: QLogic iSCSI HBA Driver
Sep 9 23:58:28.959399 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:58:28.986969 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:58:28.988406 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:58:29.038215 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:58:29.040389 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:58:29.105673 kernel: raid6: neonx8 gen() 15782 MB/s
Sep 9 23:58:29.122666 kernel: raid6: neonx4 gen() 15812 MB/s
Sep 9 23:58:29.139659 kernel: raid6: neonx2 gen() 13196 MB/s
Sep 9 23:58:29.156659 kernel: raid6: neonx1 gen() 10444 MB/s
Sep 9 23:58:29.173666 kernel: raid6: int64x8 gen() 6880 MB/s
Sep 9 23:58:29.190662 kernel: raid6: int64x4 gen() 7335 MB/s
Sep 9 23:58:29.207659 kernel: raid6: int64x2 gen() 6096 MB/s
Sep 9 23:58:29.224669 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 9 23:58:29.224698 kernel: raid6: using algorithm neonx4 gen() 15812 MB/s
Sep 9 23:58:29.241672 kernel: raid6: .... xor() 12351 MB/s, rmw enabled
Sep 9 23:58:29.241688 kernel: raid6: using neon recovery algorithm
Sep 9 23:58:29.246667 kernel: xor: measuring software checksum speed
Sep 9 23:58:29.246687 kernel: 8regs : 21607 MB/sec
Sep 9 23:58:29.247702 kernel: 32regs : 21641 MB/sec
Sep 9 23:58:29.247714 kernel: arm64_neon : 28128 MB/sec
Sep 9 23:58:29.247722 kernel: xor: using function: arm64_neon (28128 MB/sec)
Sep 9 23:58:29.299679 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:58:29.306237 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:58:29.308601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:58:29.337328 systemd-udevd[495]: Using default interface naming scheme 'v255'.
Sep 9 23:58:29.341441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:58:29.345774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:58:29.374502 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 9 23:58:29.401284 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:58:29.403421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:58:29.456519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:58:29.459402 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:58:29.507676 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 23:58:29.508930 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 23:58:29.521748 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:58:29.521787 kernel: GPT:9289727 != 19775487
Sep 9 23:58:29.522661 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:58:29.522683 kernel: GPT:9289727 != 19775487
Sep 9 23:58:29.523658 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:58:29.523682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:58:29.529106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:58:29.529220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:58:29.533177 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:58:29.536985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:58:29.555288 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 23:58:29.563626 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:58:29.565541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:58:29.574005 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 23:58:29.586305 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 23:58:29.587600 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 23:58:29.596326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:58:29.597673 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:58:29.599714 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:58:29.601796 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:58:29.604532 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:58:29.606422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:58:29.619802 disk-uuid[589]: Primary Header is updated.
Sep 9 23:58:29.619802 disk-uuid[589]: Secondary Entries is updated.
Sep 9 23:58:29.619802 disk-uuid[589]: Secondary Header is updated.
Sep 9 23:58:29.623672 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:58:29.625854 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:58:30.632687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:58:30.633948 disk-uuid[594]: The operation has completed successfully.
Sep 9 23:58:30.657654 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:58:30.658525 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:58:30.682424 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:58:30.708698 sh[610]: Success
Sep 9 23:58:30.721361 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:58:30.721426 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:58:30.721439 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:58:30.727660 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:58:30.748699 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:58:30.751236 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:58:30.763614 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:58:30.769769 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (622)
Sep 9 23:58:30.769808 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:58:30.769821 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:58:30.773910 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:58:30.773952 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:58:30.774956 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:58:30.776032 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:58:30.778566 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:58:30.780469 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:58:30.781913 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:58:30.807694 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 9 23:58:30.810221 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:58:30.810267 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:58:30.812663 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:58:30.812698 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:58:30.816693 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:58:30.818358 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:58:30.820542 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:58:30.888708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:58:30.891240 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:58:30.928667 ignition[698]: Ignition 2.21.0
Sep 9 23:58:30.928681 ignition[698]: Stage: fetch-offline
Sep 9 23:58:30.928725 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:58:30.928734 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:58:30.928933 ignition[698]: parsed url from cmdline: ""
Sep 9 23:58:30.932042 systemd-networkd[804]: lo: Link UP
Sep 9 23:58:30.928936 ignition[698]: no config URL provided
Sep 9 23:58:30.932046 systemd-networkd[804]: lo: Gained carrier
Sep 9 23:58:30.928941 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:58:30.932763 systemd-networkd[804]: Enumeration completed
Sep 9 23:58:30.928948 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:58:30.932872 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:58:30.928965 ignition[698]: op(1): [started] loading QEMU firmware config module
Sep 9 23:58:30.933175 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:58:30.928969 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 23:58:30.933179 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:58:30.937488 ignition[698]: op(1): [finished] loading QEMU firmware config module
Sep 9 23:58:30.933798 systemd-networkd[804]: eth0: Link UP
Sep 9 23:58:30.934070 systemd-networkd[804]: eth0: Gained carrier
Sep 9 23:58:30.934079 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:58:30.934162 systemd[1]: Reached target network.target - Network.
Sep 9 23:58:30.945687 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:58:30.984000 ignition[698]: parsing config with SHA512: 63c4adf8e68d0567a38593ac2a482b0edbb73c7c9b836f61b6a5d0bfa2c9e6e7fc37ea4070a34efbdb5e28b77e45156005a6c837c2aa2fb85ac7a2af52daabd4
Sep 9 23:58:30.990498 unknown[698]: fetched base config from "system"
Sep 9 23:58:30.990511 unknown[698]: fetched user config from "qemu"
Sep 9 23:58:30.991003 ignition[698]: fetch-offline: fetch-offline passed
Sep 9 23:58:30.991073 ignition[698]: Ignition finished successfully
Sep 9 23:58:30.992937 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:58:30.994388 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 23:58:30.995139 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:58:31.017245 ignition[812]: Ignition 2.21.0
Sep 9 23:58:31.017261 ignition[812]: Stage: kargs
Sep 9 23:58:31.017395 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:58:31.017404 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:58:31.019072 ignition[812]: kargs: kargs passed
Sep 9 23:58:31.019140 ignition[812]: Ignition finished successfully
Sep 9 23:58:31.023706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:58:31.026259 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:58:31.054350 ignition[820]: Ignition 2.21.0
Sep 9 23:58:31.054370 ignition[820]: Stage: disks
Sep 9 23:58:31.054511 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:58:31.054519 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:58:31.055905 ignition[820]: disks: disks passed
Sep 9 23:58:31.057584 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:58:31.055957 ignition[820]: Ignition finished successfully
Sep 9 23:58:31.058804 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:58:31.060124 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:58:31.061525 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:58:31.063203 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:58:31.064711 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:58:31.066944 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:58:31.095825 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:58:31.100426 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:58:31.102967 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:58:31.162700 kernel: EXT4-fs (vda9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:58:31.163336 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:58:31.164527 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:58:31.168214 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:58:31.171660 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:58:31.173307 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:58:31.173360 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:58:31.173383 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:58:31.183125 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:58:31.184939 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:58:31.189149 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 9 23:58:31.189177 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:58:31.189904 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:58:31.192797 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:58:31.192817 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:58:31.194180 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:58:31.218086 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:58:31.222149 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:58:31.225803 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:58:31.229413 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:58:31.304715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:58:31.306581 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:58:31.309763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:58:31.325670 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:58:31.334726 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:58:31.345813 ignition[953]: INFO : Ignition 2.21.0
Sep 9 23:58:31.345813 ignition[953]: INFO : Stage: mount
Sep 9 23:58:31.348159 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:58:31.348159 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:58:31.348159 ignition[953]: INFO : mount: mount passed
Sep 9 23:58:31.348159 ignition[953]: INFO : Ignition finished successfully
Sep 9 23:58:31.350689 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:58:31.352552 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:58:31.768709 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:58:31.770204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:58:31.795152 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Sep 9 23:58:31.795195 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:58:31.795206 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:58:31.798131 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:58:31.798153 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:58:31.799512 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:58:31.828487 ignition[982]: INFO : Ignition 2.21.0
Sep 9 23:58:31.828487 ignition[982]: INFO : Stage: files
Sep 9 23:58:31.830612 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:58:31.830612 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:58:31.830612 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:58:31.833732 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:58:31.833732 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:58:31.835983 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:58:31.835983 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:58:31.835983 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:58:31.835166 unknown[982]: wrote ssh authorized keys file for user: core
Sep 9 23:58:31.839976 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:58:31.839976 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 23:58:31.988857 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:58:32.073780 systemd-networkd[804]: eth0: Gained IPv6LL
Sep 9 23:58:32.267688 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:58:32.267688 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:58:32.271467 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 23:58:32.394760 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 23:58:32.495179 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:58:32.496794 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:58:32.508102 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:58:32.508102 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:58:32.508102 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:58:32.508102 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:58:32.508102 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:58:32.508102 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 23:58:32.738964 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 23:58:33.152804 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:58:33.152804 ignition[982]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 23:58:33.156155 ignition[982]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:58:33.169883 ignition[982]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:58:33.173865 ignition[982]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:58:33.176373 ignition[982]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:58:33.176373 ignition[982]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:58:33.176373 ignition[982]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:58:33.176373 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:58:33.176373 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:58:33.176373 ignition[982]: INFO : files: files passed
Sep 9 23:58:33.176373 ignition[982]: INFO : Ignition finished successfully
Sep 9 23:58:33.177047 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:58:33.179551 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:58:33.181091 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:58:33.195495 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:58:33.195590 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:58:33.198561 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 23:58:33.199779 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:58:33.199779 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:58:33.203860 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:58:33.200885 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:58:33.202344 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:58:33.205300 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:58:33.244770 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:58:33.245680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:58:33.246930 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:58:33.248571 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:58:33.250202 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:58:33.250988 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:58:33.283685 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:58:33.285973 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:58:33.303924 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:58:33.305080 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:58:33.306782 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:58:33.308450 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:58:33.308576 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:58:33.310678 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:58:33.312510 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:58:33.313885 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:58:33.315402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:58:33.317014 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:58:33.318568 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:58:33.320410 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:58:33.321938 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:58:33.323588 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:58:33.325293 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:58:33.326707 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:58:33.328046 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:58:33.328169 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:58:33.330127 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:58:33.331537 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:58:33.333063 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:58:33.334497 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:58:33.335518 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:58:33.335634 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:58:33.337933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:58:33.338042 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:58:33.339842 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:58:33.341011 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:58:33.345720 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:58:33.346706 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:58:33.348740 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:58:33.349890 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:58:33.349979 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:58:33.351501 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:58:33.351589 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:58:33.352718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:58:33.352834 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:58:33.354279 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:58:33.354381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:58:33.356355 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:58:33.358340 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:58:33.359292 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:58:33.359409 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:58:33.360801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:58:33.360898 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:58:33.365511 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:58:33.379831 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:58:33.388123 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:58:33.392838 ignition[1037]: INFO : Ignition 2.21.0
Sep 9 23:58:33.392838 ignition[1037]: INFO : Stage: umount
Sep 9 23:58:33.394179 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:58:33.394179 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:58:33.394179 ignition[1037]: INFO : umount: umount passed
Sep 9 23:58:33.394179 ignition[1037]: INFO : Ignition finished successfully
Sep 9 23:58:33.396039 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:58:33.396134 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:58:33.397454 systemd[1]: Stopped target network.target - Network.
Sep 9 23:58:33.398314 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:58:33.398379 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:58:33.399581 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:58:33.399620 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:58:33.400973 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:58:33.401016 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:58:33.402452 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:58:33.402492 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:58:33.404248 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:58:33.405539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:58:33.408538 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:58:33.408684 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:58:33.411778 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 23:58:33.411994 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 23:58:33.412079 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 23:58:33.415056 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 23:58:33.415553 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 23:58:33.416998 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 23:58:33.417035 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:58:33.419264 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 23:58:33.420022 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 23:58:33.420074 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:58:33.421753 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:58:33.421795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:58:33.424148 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 23:58:33.424190 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:58:33.425502 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 23:58:33.425537 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:58:33.427773 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:58:33.430313 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 23:58:33.430365 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:58:33.443632 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 23:58:33.444788 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:58:33.446184 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 23:58:33.446261 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 23:58:33.447720 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 23:58:33.447805 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 23:58:33.449674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 23:58:33.449734 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:58:33.450574 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 23:58:33.450605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:58:33.454318 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 23:58:33.454369 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:58:33.456423 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 23:58:33.456465 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:58:33.458903 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 23:58:33.458949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:58:33.461166 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 23:58:33.461214 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 23:58:33.463274 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 23:58:33.464834 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 23:58:33.464883 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:58:33.467340 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 23:58:33.467377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:58:33.469583 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 23:58:33.469622 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:58:33.472106 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:58:33.472143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:58:33.473956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:58:33.473994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:58:33.477352 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 23:58:33.477397 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 9 23:58:33.477424 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 23:58:33.477452 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:58:33.484785 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 23:58:33.484866 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 23:58:33.486794 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 23:58:33.488949 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 23:58:33.522094 systemd[1]: Switching root.
Sep 9 23:58:33.555663 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 9 23:58:33.555708 systemd-journald[244]: Journal stopped
Sep 9 23:58:34.313406 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 23:58:34.313457 kernel: SELinux: policy capability open_perms=1
Sep 9 23:58:34.313470 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 23:58:34.313481 kernel: SELinux: policy capability always_check_network=0
Sep 9 23:58:34.313491 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 23:58:34.313501 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 23:58:34.313511 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 23:58:34.313520 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 23:58:34.313529 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 23:58:34.313538 kernel: audit: type=1403 audit(1757462313.729:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 23:58:34.313552 systemd[1]: Successfully loaded SELinux policy in 47.469ms.
Sep 9 23:58:34.313585 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.718ms.
Sep 9 23:58:34.313600 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:58:34.313610 systemd[1]: Detected virtualization kvm.
Sep 9 23:58:34.313620 systemd[1]: Detected architecture arm64.
Sep 9 23:58:34.313631 systemd[1]: Detected first boot.
Sep 9 23:58:34.313641 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:58:34.313665 zram_generator::config[1083]: No configuration found.
Sep 9 23:58:34.313676 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 23:58:34.313685 systemd[1]: Populated /etc with preset unit settings.
Sep 9 23:58:34.313695 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 23:58:34.313712 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 23:58:34.313722 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 23:58:34.313732 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:58:34.313742 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 23:58:34.313752 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 23:58:34.313762 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 23:58:34.313773 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 23:58:34.313783 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 23:58:34.313795 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 23:58:34.313806 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 23:58:34.313816 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 23:58:34.313825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:58:34.313841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:58:34.313851 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 23:58:34.313865 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 23:58:34.313876 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 23:58:34.313886 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:58:34.313898 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 23:58:34.313908 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:58:34.313918 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:58:34.313928 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 23:58:34.313938 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 23:58:34.313948 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:58:34.313958 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 23:58:34.313968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:58:34.313980 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:58:34.313990 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:58:34.313999 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:58:34.314009 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 23:58:34.314019 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 23:58:34.314035 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 23:58:34.314045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:58:34.314055 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:58:34.314068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:58:34.314082 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 23:58:34.314092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 23:58:34.314102 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 23:58:34.314113 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 23:58:34.314123 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 23:58:34.314133 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 23:58:34.314143 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 23:58:34.314154 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 23:58:34.314164 systemd[1]: Reached target machines.target - Containers.
Sep 9 23:58:34.314175 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 23:58:34.314185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:58:34.314195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:58:34.314206 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 23:58:34.314216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:58:34.314225 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:58:34.314235 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:58:34.314245 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 23:58:34.314257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:58:34.314268 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 23:58:34.314278 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 23:58:34.314290 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 23:58:34.314300 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 23:58:34.314310 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 23:58:34.314320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:58:34.314330 kernel: fuse: init (API version 7.41)
Sep 9 23:58:34.314341 kernel: loop: module loaded
Sep 9 23:58:34.314350 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:58:34.314360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:58:34.314370 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:58:34.314380 kernel: ACPI: bus type drm_connector registered
Sep 9 23:58:34.314389 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 23:58:34.314400 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 23:58:34.314410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:58:34.314421 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 23:58:34.314431 systemd[1]: Stopped verity-setup.service.
Sep 9 23:58:34.314440 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 23:58:34.314469 systemd-journald[1148]: Collecting audit messages is disabled.
Sep 9 23:58:34.314494 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 23:58:34.314506 systemd-journald[1148]: Journal started
Sep 9 23:58:34.314527 systemd-journald[1148]: Runtime Journal (/run/log/journal/29a68d45d200488899f6289d6ea888df) is 6M, max 48.5M, 42.4M free.
Sep 9 23:58:34.106354 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 23:58:34.128703 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 23:58:34.129088 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 23:58:34.316692 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:58:34.318183 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 23:58:34.319353 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 23:58:34.320441 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 23:58:34.321720 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 23:58:34.322895 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 23:58:34.324249 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:58:34.325549 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 23:58:34.325763 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 23:58:34.326994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:58:34.327138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:58:34.330399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:58:34.330548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:58:34.331802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:58:34.331965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:58:34.333432 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 23:58:34.333609 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 23:58:34.334767 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:58:34.334922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:58:34.335997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:58:34.337318 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:58:34.338587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 23:58:34.339949 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 23:58:34.351895 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:58:34.354002 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 23:58:34.355807 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 23:58:34.356658 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 23:58:34.356691 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:58:34.358310 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 23:58:34.366828 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 23:58:34.367695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:58:34.368877 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 23:58:34.370756 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 23:58:34.371726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:58:34.372763 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 23:58:34.373725 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:58:34.374585 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:58:34.378779 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 23:58:34.380929 systemd-journald[1148]: Time spent on flushing to /var/log/journal/29a68d45d200488899f6289d6ea888df is 18.142ms for 894 entries.
Sep 9 23:58:34.380929 systemd-journald[1148]: System Journal (/var/log/journal/29a68d45d200488899f6289d6ea888df) is 8M, max 195.6M, 187.6M free.
Sep 9 23:58:34.402764 systemd-journald[1148]: Received client request to flush runtime journal.
Sep 9 23:58:34.381780 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:58:34.384370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:58:34.385641 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 23:58:34.387481 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 23:58:34.404333 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 9 23:58:34.404351 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 9 23:58:34.405027 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 23:58:34.408748 kernel: loop0: detected capacity change from 0 to 207008
Sep 9 23:58:34.410321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:58:34.413815 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 23:58:34.416253 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 23:58:34.420352 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 23:58:34.426722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 23:58:34.432638 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 23:58:34.435707 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:58:34.442839 kernel: loop1: detected capacity change from 0 to 100608
Sep 9 23:58:34.451479 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 23:58:34.463924 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 23:58:34.466408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:58:34.469675 kernel: loop2: detected capacity change from 0 to 119320
Sep 9 23:58:34.485520 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Sep 9 23:58:34.485546 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Sep 9 23:58:34.489513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:58:34.514737 kernel: loop3: detected capacity change from 0 to 207008
Sep 9 23:58:34.521687 kernel: loop4: detected capacity change from 0 to 100608
Sep 9 23:58:34.527671 kernel: loop5: detected capacity change from 0 to 119320
Sep 9 23:58:34.533255 (sd-merge)[1230]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 23:58:34.533662 (sd-merge)[1230]: Merged extensions into '/usr'.
Sep 9 23:58:34.537103 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 23:58:34.537120 systemd[1]: Reloading...
Sep 9 23:58:34.580670 zram_generator::config[1253]: No configuration found.
Sep 9 23:58:34.650822 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 23:58:34.734953 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 23:58:34.735422 systemd[1]: Reloading finished in 197 ms.
Sep 9 23:58:34.766158 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 23:58:34.767364 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 23:58:34.779786 systemd[1]: Starting ensure-sysext.service...
Sep 9 23:58:34.781346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:58:34.790474 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)...
Sep 9 23:58:34.790490 systemd[1]: Reloading...
Sep 9 23:58:34.794839 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 23:58:34.794868 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 23:58:34.795118 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 23:58:34.795301 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 23:58:34.795933 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 23:58:34.796136 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Sep 9 23:58:34.796185 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Sep 9 23:58:34.798882 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:58:34.798895 systemd-tmpfiles[1291]: Skipping /boot
Sep 9 23:58:34.804989 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:58:34.805004 systemd-tmpfiles[1291]: Skipping /boot
Sep 9 23:58:34.830684 zram_generator::config[1314]: No configuration found.
Sep 9 23:58:34.961954 systemd[1]: Reloading finished in 171 ms.
Sep 9 23:58:34.985256 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 23:58:34.990746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:58:35.003715 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:58:35.005786 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 23:58:35.007599 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 23:58:35.011779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:58:35.013874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:58:35.015950 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 23:58:35.032763 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 23:58:35.035469 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 23:58:35.039274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:58:35.040584 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:58:35.043256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:58:35.047040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:58:35.048122 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:58:35.048288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:58:35.056930 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 23:58:35.058639 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Sep 9 23:58:35.059449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:58:35.059678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:58:35.061221 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:58:35.061391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:58:35.069683 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 23:58:35.071107 augenrules[1387]: No rules
Sep 9 23:58:35.072169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:58:35.072310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:58:35.074243 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:58:35.074429 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:58:35.077275 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 23:58:35.078851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:58:35.086028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:58:35.088496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:58:35.093862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:58:35.094789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:58:35.094898 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:58:35.098974 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:58:35.100696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:58:35.100823 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 23:58:35.114011 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 23:58:35.118541 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:58:35.119898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:58:35.132388 systemd[1]: Finished ensure-sysext.service.
Sep 9 23:58:35.133782 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 23:58:35.136050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:58:35.136257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:58:35.145157 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 23:58:35.150442 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:58:35.151569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:58:35.152855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:58:35.156980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:58:35.157996 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:58:35.158036 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:58:35.158094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:58:35.161810 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 23:58:35.163714 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 23:58:35.164141 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:58:35.164320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:58:35.180981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:58:35.181158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:58:35.183047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:58:35.197294 augenrules[1436]: /sbin/augenrules: No change
Sep 9 23:58:35.205265 augenrules[1466]: No rules
Sep 9 23:58:35.205068 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:58:35.205714 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:58:35.216984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:58:35.219402 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 23:58:35.250076 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 23:58:35.280658 systemd-networkd[1424]: lo: Link UP
Sep 9 23:58:35.280956 systemd-networkd[1424]: lo: Gained carrier
Sep 9 23:58:35.281899 systemd-networkd[1424]: Enumeration completed
Sep 9 23:58:35.282404 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:58:35.282482 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:58:35.284633 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:58:35.285322 systemd-networkd[1424]: eth0: Link UP
Sep 9 23:58:35.285510 systemd-networkd[1424]: eth0: Gained carrier
Sep 9 23:58:35.285586 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:58:35.286917 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 23:58:35.290336 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 23:58:35.305760 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:58:35.306144 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 23:58:35.307814 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:58:35.315888 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection.
Sep 9 23:58:35.717729 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 23:58:35.717771 systemd-timesyncd[1442]: Initial clock synchronization to Tue 2025-09-09 23:58:35.717653 UTC.
Sep 9 23:58:35.720366 systemd-resolved[1357]: Positive Trust Anchors:
Sep 9 23:58:35.720379 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:58:35.720415 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:58:35.730684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:58:35.732891 systemd-resolved[1357]: Defaulting to hostname 'linux'.
Sep 9 23:58:35.734379 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 23:58:35.735718 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:58:35.737250 systemd[1]: Reached target network.target - Network.
Sep 9 23:58:35.738076 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:58:35.778658 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:58:35.779791 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:58:35.780745 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 23:58:35.781779 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 23:58:35.782989 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 23:58:35.784091 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
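The networkd entries above show eth0 being matched by `/usr/lib/systemd/network/zz-default.network` and configured via DHCPv4. A sketch of what such a catch-all DHCP network file typically looks like (the file actually shipped by Flatcar may differ in detail; the wildcard match is why networkd warns the match is "based on potentially unpredictable interface name"):

```ini
# zz-default.network - sketch of a catch-all DHCP configuration
[Match]
# Matches any interface name
Name=*

[Network]
DHCP=yes
```

A more specific `.network` file sorting earlier than `zz-default.network` would override this fallback for a given interface.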
Sep 9 23:58:35.785171 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 23:58:35.786237 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 23:58:35.786269 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:58:35.787047 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:58:35.788882 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:58:35.791026 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:58:35.793622 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:58:35.794817 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:58:35.795903 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:58:35.799085 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:58:35.800380 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:58:35.802040 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:58:35.803035 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:58:35.803879 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:58:35.804710 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:58:35.804738 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:58:35.805649 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:58:35.807336 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:58:35.809098 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 23:58:35.810902 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:58:35.812988 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:58:35.813784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:58:35.814710 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:58:35.817877 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:58:35.818812 jq[1505]: false
Sep 9 23:58:35.820202 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:58:35.823630 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:58:35.827067 extend-filesystems[1506]: Found /dev/vda6
Sep 9 23:58:35.828169 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:58:35.829169 extend-filesystems[1506]: Found /dev/vda9
Sep 9 23:58:35.831228 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:58:35.834339 extend-filesystems[1506]: Checking size of /dev/vda9
Sep 9 23:58:35.831752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:58:35.832337 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:58:35.837686 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:58:35.842531 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 23:58:35.845603 jq[1524]: true
Sep 9 23:58:35.844971 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:58:35.845147 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:58:35.845378 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:58:35.845690 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:58:35.849857 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:58:35.850044 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:58:35.861299 update_engine[1523]: I20250909 23:58:35.861078 1523 main.cc:92] Flatcar Update Engine starting
Sep 9 23:58:35.862660 extend-filesystems[1506]: Resized partition /dev/vda9
Sep 9 23:58:35.865543 extend-filesystems[1542]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 23:58:35.872226 jq[1531]: true
Sep 9 23:58:35.872522 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 23:58:35.890111 tar[1530]: linux-arm64/LICENSE
Sep 9 23:58:35.890336 tar[1530]: linux-arm64/helm
Sep 9 23:58:35.900545 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 23:58:35.900910 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:58:35.903861 dbus-daemon[1503]: [system] SELinux support is enabled
Sep 9 23:58:35.904871 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:58:35.911271 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:58:35.911301 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:58:35.912571 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 23:58:35.912844 systemd-logind[1520]: New seat seat0.
Sep 9 23:58:35.913779 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:58:35.913803 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:58:35.914123 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 23:58:35.914123 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:58:35.914123 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 23:58:35.927677 extend-filesystems[1506]: Resized filesystem in /dev/vda9
Sep 9 23:58:35.928341 update_engine[1523]: I20250909 23:58:35.919854 1523 update_check_scheduler.cc:74] Next update check in 5m30s
Sep 9 23:58:35.915315 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:58:35.917089 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:58:35.917317 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:58:35.927735 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:58:35.929583 bash[1564]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:58:35.931832 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:58:35.934426 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:58:35.936044 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
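As a quick sanity check on the online resize logged above (553472 → 1864699 blocks, at the 4 KiB block size implied by the "(4k) blocks" message), the before and after filesystem sizes work out as follows:

```python
# Sanity-check the resize2fs numbers from the log above.
BLOCK_SIZE = 4096  # bytes; ext4 block size implied by "(4k) blocks"

old_blocks = 553_472
new_blocks = 1_864_699

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30

print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")
# roughly 2.11 GiB grown to about 7.11 GiB
```

This matches the usual Flatcar first-boot behavior of growing the root partition's filesystem to fill the disk image.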
Sep 9 23:58:35.988720 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 23:58:36.070287 containerd[1546]: time="2025-09-09T23:58:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 23:58:36.071319 containerd[1546]: time="2025-09-09T23:58:36.071282479Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 23:58:36.081233 containerd[1546]: time="2025-09-09T23:58:36.081177679Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.48µs"
Sep 9 23:58:36.081233 containerd[1546]: time="2025-09-09T23:58:36.081213599Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 23:58:36.081233 containerd[1546]: time="2025-09-09T23:58:36.081231759Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 23:58:36.081398 containerd[1546]: time="2025-09-09T23:58:36.081377799Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 23:58:36.081429 containerd[1546]: time="2025-09-09T23:58:36.081399319Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 23:58:36.081448 containerd[1546]: time="2025-09-09T23:58:36.081435399Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:58:36.081513 containerd[1546]: time="2025-09-09T23:58:36.081493559Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:58:36.081534 containerd[1546]: time="2025-09-09T23:58:36.081523079Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:58:36.081771 containerd[1546]: time="2025-09-09T23:58:36.081737439Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:58:36.081771 containerd[1546]: time="2025-09-09T23:58:36.081761679Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:58:36.081810 containerd[1546]: time="2025-09-09T23:58:36.081773599Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:58:36.081810 containerd[1546]: time="2025-09-09T23:58:36.081782759Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 23:58:36.081913 containerd[1546]: time="2025-09-09T23:58:36.081894839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 23:58:36.082112 containerd[1546]: time="2025-09-09T23:58:36.082081839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:58:36.082134 containerd[1546]: time="2025-09-09T23:58:36.082122359Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:58:36.082156 containerd[1546]: time="2025-09-09T23:58:36.082133639Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 23:58:36.082191 containerd[1546]: time="2025-09-09T23:58:36.082177919Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 23:58:36.083493 containerd[1546]: time="2025-09-09T23:58:36.083457839Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 23:58:36.083597 containerd[1546]: time="2025-09-09T23:58:36.083573359Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:58:36.086524 containerd[1546]: time="2025-09-09T23:58:36.086488399Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 23:58:36.086569 containerd[1546]: time="2025-09-09T23:58:36.086560479Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 23:58:36.086588 containerd[1546]: time="2025-09-09T23:58:36.086575999Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 23:58:36.086605 containerd[1546]: time="2025-09-09T23:58:36.086587239Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 23:58:36.086605 containerd[1546]: time="2025-09-09T23:58:36.086598839Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 23:58:36.086673 containerd[1546]: time="2025-09-09T23:58:36.086659559Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 23:58:36.086720 containerd[1546]: time="2025-09-09T23:58:36.086675959Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 23:58:36.086720 containerd[1546]: time="2025-09-09T23:58:36.086688799Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 23:58:36.086720 containerd[1546]: time="2025-09-09T23:58:36.086699919Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 23:58:36.086720 containerd[1546]: time="2025-09-09T23:58:36.086709239Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 23:58:36.086720 containerd[1546]: time="2025-09-09T23:58:36.086718799Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 23:58:36.086796 containerd[1546]: time="2025-09-09T23:58:36.086730839Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 23:58:36.086865 containerd[1546]: time="2025-09-09T23:58:36.086846199Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 23:58:36.086888 containerd[1546]: time="2025-09-09T23:58:36.086873159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 23:58:36.086906 containerd[1546]: time="2025-09-09T23:58:36.086888399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 23:58:36.086906 containerd[1546]: time="2025-09-09T23:58:36.086899639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 23:58:36.086939 containerd[1546]: time="2025-09-09T23:58:36.086909839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 23:58:36.086939 containerd[1546]: time="2025-09-09T23:58:36.086921919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 23:58:36.086974 containerd[1546]: time="2025-09-09T23:58:36.086938119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 23:58:36.086974 containerd[1546]: time="2025-09-09T23:58:36.086948639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 23:58:36.086974 containerd[1546]: time="2025-09-09T23:58:36.086958839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 23:58:36.086974 containerd[1546]: time="2025-09-09T23:58:36.086968999Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 23:58:36.087040 containerd[1546]: time="2025-09-09T23:58:36.086978239Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 23:58:36.087167 containerd[1546]: time="2025-09-09T23:58:36.087154359Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 23:58:36.087190 containerd[1546]: time="2025-09-09T23:58:36.087171799Z" level=info msg="Start snapshots syncer"
Sep 9 23:58:36.087221 containerd[1546]: time="2025-09-09T23:58:36.087209399Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 23:58:36.087579 containerd[1546]: time="2025-09-09T23:58:36.087544519Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 23:58:36.087668 containerd[1546]: time="2025-09-09T23:58:36.087598359Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 23:58:36.087693 containerd[1546]: time="2025-09-09T23:58:36.087682559Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:58:36.087927 containerd[1546]: time="2025-09-09T23:58:36.087904439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:58:36.087947 containerd[1546]: time="2025-09-09T23:58:36.087938119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:58:36.087964 containerd[1546]: time="2025-09-09T23:58:36.087950319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:58:36.087964 containerd[1546]: time="2025-09-09T23:58:36.087960559Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:58:36.087996 containerd[1546]: time="2025-09-09T23:58:36.087971439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:58:36.087996 containerd[1546]: time="2025-09-09T23:58:36.087982639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:58:36.087996 containerd[1546]: time="2025-09-09T23:58:36.087992119Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:58:36.088049 containerd[1546]: time="2025-09-09T23:58:36.088020719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:58:36.088049 containerd[1546]: time="2025-09-09T23:58:36.088032719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:58:36.088049 containerd[1546]: time="2025-09-09T23:58:36.088042919Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:58:36.088095 containerd[1546]: time="2025-09-09T23:58:36.088069239Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:58:36.088095 containerd[1546]: time="2025-09-09T23:58:36.088081839Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:58:36.088095 containerd[1546]: time="2025-09-09T23:58:36.088089559Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:58:36.088147 containerd[1546]: time="2025-09-09T23:58:36.088098559Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:58:36.088147 containerd[1546]: time="2025-09-09T23:58:36.088106679Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 23:58:36.088147 containerd[1546]: time="2025-09-09T23:58:36.088120079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 23:58:36.088147 containerd[1546]: time="2025-09-09T23:58:36.088133519Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 23:58:36.088219 containerd[1546]: time="2025-09-09T23:58:36.088208919Z" level=info msg="runtime interface created"
Sep 9 23:58:36.088238 containerd[1546]: time="2025-09-09T23:58:36.088218279Z" level=info msg="created NRI interface"
Sep 9 23:58:36.088238 containerd[1546]: time="2025-09-09T23:58:36.088228559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 23:58:36.088273 containerd[1546]: time="2025-09-09T23:58:36.088239279Z" level=info msg="Connect containerd service"
Sep 9 23:58:36.088273 containerd[1546]: time="2025-09-09T23:58:36.088267679Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:58:36.088961 containerd[1546]: time="2025-09-09T23:58:36.088936079Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:58:36.158103 containerd[1546]: time="2025-09-09T23:58:36.157978959Z" level=info msg="Start subscribing containerd event"
Sep 9 23:58:36.158103 containerd[1546]: time="2025-09-09T23:58:36.158058679Z" level=info msg="Start recovering state"
Sep 9 23:58:36.158205 containerd[1546]: time="2025-09-09T23:58:36.158144719Z" level=info msg="Start event monitor"
Sep 9 23:58:36.158205 containerd[1546]: time="2025-09-09T23:58:36.158157639Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:58:36.158205 containerd[1546]: time="2025-09-09T23:58:36.158165799Z" level=info msg="Start streaming server"
Sep 9 23:58:36.158205 containerd[1546]: time="2025-09-09T23:58:36.158174039Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 23:58:36.158205 containerd[1546]: time="2025-09-09T23:58:36.158180439Z" level=info msg="runtime interface starting up..."
Sep 9 23:58:36.158205 containerd[1546]: time="2025-09-09T23:58:36.158185839Z" level=info msg="starting plugins..."
Sep 9 23:58:36.158205 containerd[1546]: time="2025-09-09T23:58:36.158199599Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 23:58:36.158605 containerd[1546]: time="2025-09-09T23:58:36.158571799Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:58:36.158696 containerd[1546]: time="2025-09-09T23:58:36.158622679Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:58:36.160719 containerd[1546]: time="2025-09-09T23:58:36.160662559Z" level=info msg="containerd successfully booted in 0.090707s"
Sep 9 23:58:36.160765 systemd[1]: Started containerd.service - containerd container runtime.
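containerd's "no network config found in /etc/cni/net.d" error above is expected on a freshly booted node: the CRI plugin's `confDir` is `/etc/cni/net.d` (per the config dump earlier) and no CNI plugin has installed a config there yet. For illustration only, a minimal hypothetical conflist that would satisfy the check looks like this (file name, network name, and subnet are all made up; a real cluster's CNI add-on provides its own):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
```

Dropped into `/etc/cni/net.d/` (e.g. as `10-example.conflist`), the cni conf syncer started a few lines later would pick it up without a containerd restart.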
Sep 9 23:58:36.200801 tar[1530]: linux-arm64/README.md
Sep 9 23:58:36.221551 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:58:36.923105 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:58:36.943562 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:58:36.945887 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:58:36.967066 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:58:36.968559 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:58:36.970912 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:58:37.000037 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:58:37.004636 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:58:37.006553 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 23:58:37.007783 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 23:58:37.274659 systemd-networkd[1424]: eth0: Gained IPv6LL
Sep 9 23:58:37.277073 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:58:37.279829 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:58:37.281965 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 23:58:37.284032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:58:37.299427 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:58:37.326160 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:58:37.328154 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 23:58:37.329555 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 23:58:37.331347 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:58:37.879960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:58:37.881278 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:58:37.884222 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:58:37.885604 systemd[1]: Startup finished in 2.010s (kernel) + 5.133s (initrd) + 3.802s (userspace) = 10.946s.
Sep 9 23:58:38.258293 kubelet[1635]: E0909 23:58:38.257695 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:58:38.260690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:58:38.260816 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:58:38.261104 systemd[1]: kubelet.service: Consumed 745ms CPU time, 256.5M memory peak.
Sep 9 23:58:41.463071 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:58:41.464382 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:53774.service - OpenSSH per-connection server daemon (10.0.0.1:53774).
Sep 9 23:58:41.522492 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 53774 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:58:41.523977 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:58:41.529685 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:58:41.530482 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:58:41.535458 systemd-logind[1520]: New session 1 of user core.
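The kubelet exit above is the usual pre-bootstrap failure: it cannot read `/var/lib/kubelet/config.yaml`, which on kubeadm-provisioned nodes is written during `kubeadm init` or `kubeadm join`, so the service crash-loops until the node joins a cluster. For orientation only, a minimal sketch of what that file contains once generated (the actual content comes from the cluster's kubelet ConfigMap; the value shown is illustrative):

```yaml
# /var/lib/kubelet/config.yaml - illustrative sketch; normally generated
# by kubeadm, not written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Matches the SystemdCgroup=true runc option in the containerd config above
cgroupDriver: systemd
```

The unset `KUBELET_EXTRA_ARGS` / `KUBELET_KUBEADM_ARGS` warning a few lines earlier is benign for the same reason: those variables are also populated during bootstrap.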
Sep 9 23:58:41.556445 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:58:41.558615 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:58:41.579503 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 23:58:41.581677 systemd-logind[1520]: New session c1 of user core.
Sep 9 23:58:41.686068 systemd[1654]: Queued start job for default target default.target.
Sep 9 23:58:41.708447 systemd[1654]: Created slice app.slice - User Application Slice.
Sep 9 23:58:41.708475 systemd[1654]: Reached target paths.target - Paths.
Sep 9 23:58:41.708532 systemd[1654]: Reached target timers.target - Timers.
Sep 9 23:58:41.709654 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 23:58:41.718623 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 23:58:41.718682 systemd[1654]: Reached target sockets.target - Sockets.
Sep 9 23:58:41.718716 systemd[1654]: Reached target basic.target - Basic System.
Sep 9 23:58:41.718743 systemd[1654]: Reached target default.target - Main User Target.
Sep 9 23:58:41.718767 systemd[1654]: Startup finished in 131ms.
Sep 9 23:58:41.718963 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 23:58:41.720717 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 23:58:41.782016 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:53784.service - OpenSSH per-connection server daemon (10.0.0.1:53784).
Sep 9 23:58:41.839642 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 53784 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:58:41.840410 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:58:41.844565 systemd-logind[1520]: New session 2 of user core.
Sep 9 23:58:41.865693 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 23:58:41.916329 sshd[1668]: Connection closed by 10.0.0.1 port 53784
Sep 9 23:58:41.916853 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 9 23:58:41.925324 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:53784.service: Deactivated successfully.
Sep 9 23:58:41.927974 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 23:58:41.928638 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit.
Sep 9 23:58:41.930629 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:53788.service - OpenSSH per-connection server daemon (10.0.0.1:53788).
Sep 9 23:58:41.931555 systemd-logind[1520]: Removed session 2.
Sep 9 23:58:41.991168 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 53788 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:58:41.992396 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:58:41.996812 systemd-logind[1520]: New session 3 of user core.
Sep 9 23:58:42.006656 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 23:58:42.054938 sshd[1677]: Connection closed by 10.0.0.1 port 53788
Sep 9 23:58:42.055795 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
Sep 9 23:58:42.065569 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:53788.service: Deactivated successfully.
Sep 9 23:58:42.067947 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 23:58:42.069408 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit.
Sep 9 23:58:42.070762 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:53804.service - OpenSSH per-connection server daemon (10.0.0.1:53804).
Sep 9 23:58:42.071504 systemd-logind[1520]: Removed session 3.
Sep 9 23:58:42.131765 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 53804 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:58:42.132989 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:58:42.136446 systemd-logind[1520]: New session 4 of user core.
Sep 9 23:58:42.142645 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 23:58:42.193026 sshd[1686]: Connection closed by 10.0.0.1 port 53804
Sep 9 23:58:42.193483 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Sep 9 23:58:42.204377 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:53804.service: Deactivated successfully.
Sep 9 23:58:42.205847 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 23:58:42.206443 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit.
Sep 9 23:58:42.208128 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814).
Sep 9 23:58:42.208974 systemd-logind[1520]: Removed session 4.
Sep 9 23:58:42.265798 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:58:42.266969 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:58:42.270604 systemd-logind[1520]: New session 5 of user core.
Sep 9 23:58:42.277672 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 23:58:42.338113 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 23:58:42.338362 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:58:42.349324 sudo[1696]: pam_unix(sudo:session): session closed for user root
Sep 9 23:58:42.351297 sshd[1695]: Connection closed by 10.0.0.1 port 53814
Sep 9 23:58:42.351096 sshd-session[1692]: pam_unix(sshd:session): session closed for user core
Sep 9 23:58:42.362468 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:53814.service: Deactivated successfully.
Sep 9 23:58:42.363900 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 23:58:42.364564 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit.
Sep 9 23:58:42.366394 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:53818.service - OpenSSH per-connection server daemon (10.0.0.1:53818).
Sep 9 23:58:42.367163 systemd-logind[1520]: Removed session 5.
Sep 9 23:58:42.421344 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 53818 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:58:42.422597 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:58:42.426184 systemd-logind[1520]: New session 6 of user core.
Sep 9 23:58:42.438650 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 23:58:42.489949 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 23:58:42.490207 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:58:42.496926 sudo[1707]: pam_unix(sudo:session): session closed for user root
Sep 9 23:58:42.501563 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 23:58:42.501832 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:58:42.509926 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:58:42.546654 augenrules[1729]: No rules
Sep 9 23:58:42.548080 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:58:42.548300 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:58:42.549157 sudo[1706]: pam_unix(sudo:session): session closed for user root
Sep 9 23:58:42.550326 sshd[1705]: Connection closed by 10.0.0.1 port 53818
Sep 9 23:58:42.550765 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Sep 9 23:58:42.561317 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:53818.service: Deactivated successfully.
Sep 9 23:58:42.562947 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 23:58:42.563656 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit.
Sep 9 23:58:42.566216 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:53834.service - OpenSSH per-connection server daemon (10.0.0.1:53834).
Sep 9 23:58:42.566805 systemd-logind[1520]: Removed session 6.
Sep 9 23:58:42.628692 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 53834 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:58:42.630005 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:58:42.634219 systemd-logind[1520]: New session 7 of user core.
Sep 9 23:58:42.648663 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 23:58:42.699667 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 23:58:42.699922 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:58:42.961712 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 23:58:42.981814 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 23:58:43.175650 dockerd[1763]: time="2025-09-09T23:58:43.175586599Z" level=info msg="Starting up"
Sep 9 23:58:43.176478 dockerd[1763]: time="2025-09-09T23:58:43.176444519Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 23:58:43.186746 dockerd[1763]: time="2025-09-09T23:58:43.186715119Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 23:58:43.282905 dockerd[1763]: time="2025-09-09T23:58:43.282785199Z" level=info msg="Loading containers: start."
Sep 9 23:58:43.290582 kernel: Initializing XFRM netlink socket
Sep 9 23:58:43.472203 systemd-networkd[1424]: docker0: Link UP
Sep 9 23:58:43.475468 dockerd[1763]: time="2025-09-09T23:58:43.475424199Z" level=info msg="Loading containers: done."
Sep 9 23:58:43.487436 dockerd[1763]: time="2025-09-09T23:58:43.487373119Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 23:58:43.487591 dockerd[1763]: time="2025-09-09T23:58:43.487470119Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 23:58:43.487591 dockerd[1763]: time="2025-09-09T23:58:43.487575599Z" level=info msg="Initializing buildkit"
Sep 9 23:58:43.507787 dockerd[1763]: time="2025-09-09T23:58:43.507749119Z" level=info msg="Completed buildkit initialization"
Sep 9 23:58:43.513486 dockerd[1763]: time="2025-09-09T23:58:43.513440359Z" level=info msg="Daemon has completed initialization"
Sep 9 23:58:43.513698 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 23:58:43.514183 dockerd[1763]: time="2025-09-09T23:58:43.513503679Z" level=info msg="API listen on /run/docker.sock"
Sep 9 23:58:44.008598 containerd[1546]: time="2025-09-09T23:58:44.008563439Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 23:58:44.583413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693582191.mount: Deactivated successfully.
Sep 9 23:58:45.583140 containerd[1546]: time="2025-09-09T23:58:45.583081439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:45.583538 containerd[1546]: time="2025-09-09T23:58:45.583488399Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 9 23:58:45.584462 containerd[1546]: time="2025-09-09T23:58:45.584428279Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:45.587448 containerd[1546]: time="2025-09-09T23:58:45.586862479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:45.588004 containerd[1546]: time="2025-09-09T23:58:45.587971799Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.57936904s"
Sep 9 23:58:45.588048 containerd[1546]: time="2025-09-09T23:58:45.588014559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 23:58:45.588597 containerd[1546]: time="2025-09-09T23:58:45.588576719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 23:58:46.643263 containerd[1546]: time="2025-09-09T23:58:46.643207079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:46.647309 containerd[1546]: time="2025-09-09T23:58:46.647267519Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 9 23:58:46.648314 containerd[1546]: time="2025-09-09T23:58:46.648284919Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:46.651401 containerd[1546]: time="2025-09-09T23:58:46.651361559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:46.652786 containerd[1546]: time="2025-09-09T23:58:46.652747319Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.06413964s"
Sep 9 23:58:46.652786 containerd[1546]: time="2025-09-09T23:58:46.652782399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 23:58:46.653398 containerd[1546]: time="2025-09-09T23:58:46.653178959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 23:58:47.781778 containerd[1546]: time="2025-09-09T23:58:47.781731879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:47.782753 containerd[1546]: time="2025-09-09T23:58:47.782570199Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 9 23:58:47.783410 containerd[1546]: time="2025-09-09T23:58:47.783372279Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:47.786211 containerd[1546]: time="2025-09-09T23:58:47.786178919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:47.787741 containerd[1546]: time="2025-09-09T23:58:47.787715519Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.13450348s"
Sep 9 23:58:47.787793 containerd[1546]: time="2025-09-09T23:58:47.787748079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 23:58:47.788128 containerd[1546]: time="2025-09-09T23:58:47.788111559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 23:58:48.475319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:58:48.478702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:58:48.633563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:58:48.638312 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:58:48.678899 kubelet[2055]: E0909 23:58:48.678854 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:58:48.681952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:58:48.682080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:58:48.683593 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.1M memory peak.
Sep 9 23:58:48.758157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936881166.mount: Deactivated successfully.
Sep 9 23:58:49.045807 containerd[1546]: time="2025-09-09T23:58:49.045700919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:49.046737 containerd[1546]: time="2025-09-09T23:58:49.046564359Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 9 23:58:49.047469 containerd[1546]: time="2025-09-09T23:58:49.047441839Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:49.050949 containerd[1546]: time="2025-09-09T23:58:49.050918799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:49.051656 containerd[1546]: time="2025-09-09T23:58:49.051473999Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.2633364s"
Sep 9 23:58:49.051656 containerd[1546]: time="2025-09-09T23:58:49.051502399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 23:58:49.052027 containerd[1546]: time="2025-09-09T23:58:49.052004199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 23:58:49.555790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4122322843.mount: Deactivated successfully.
Sep 9 23:58:50.298094 containerd[1546]: time="2025-09-09T23:58:50.298042799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:50.299139 containerd[1546]: time="2025-09-09T23:58:50.299108639Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 23:58:50.300014 containerd[1546]: time="2025-09-09T23:58:50.299983679Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:50.303528 containerd[1546]: time="2025-09-09T23:58:50.303446159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:50.304921 containerd[1546]: time="2025-09-09T23:58:50.304890279Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.25285504s"
Sep 9 23:58:50.305053 containerd[1546]: time="2025-09-09T23:58:50.305010799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 23:58:50.305490 containerd[1546]: time="2025-09-09T23:58:50.305463639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 23:58:50.753628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058391791.mount: Deactivated successfully.
Sep 9 23:58:50.758997 containerd[1546]: time="2025-09-09T23:58:50.758947759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:58:50.759540 containerd[1546]: time="2025-09-09T23:58:50.759515039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 23:58:50.760550 containerd[1546]: time="2025-09-09T23:58:50.760495719Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:58:50.763136 containerd[1546]: time="2025-09-09T23:58:50.763100639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:58:50.763715 containerd[1546]: time="2025-09-09T23:58:50.763686839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 458.09828ms"
Sep 9 23:58:50.763753 containerd[1546]: time="2025-09-09T23:58:50.763716479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 23:58:50.764413 containerd[1546]: time="2025-09-09T23:58:50.764345719Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 23:58:51.273643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030374624.mount: Deactivated successfully.
Sep 9 23:58:52.709727 containerd[1546]: time="2025-09-09T23:58:52.709660919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:52.710452 containerd[1546]: time="2025-09-09T23:58:52.710401679Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 9 23:58:52.711183 containerd[1546]: time="2025-09-09T23:58:52.711144759Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:52.714572 containerd[1546]: time="2025-09-09T23:58:52.714505079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:58:52.715580 containerd[1546]: time="2025-09-09T23:58:52.715544599Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.95116008s"
Sep 9 23:58:52.715644 containerd[1546]: time="2025-09-09T23:58:52.715586999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 23:58:56.507405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:58:56.507556 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.1M memory peak.
Sep 9 23:58:56.509248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:58:56.528505 systemd[1]: Reload requested from client PID 2208 ('systemctl') (unit session-7.scope)...
Sep 9 23:58:56.528539 systemd[1]: Reloading...
Sep 9 23:58:56.609732 zram_generator::config[2251]: No configuration found.
Sep 9 23:58:56.759060 systemd[1]: Reloading finished in 230 ms.
Sep 9 23:58:56.821945 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 23:58:56.822019 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 23:58:56.822259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:58:56.822304 systemd[1]: kubelet.service: Consumed 86ms CPU time, 94.9M memory peak.
Sep 9 23:58:56.823640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:58:56.931313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:58:56.934781 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 23:58:56.966426 kubelet[2296]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:58:56.966426 kubelet[2296]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 23:58:56.966426 kubelet[2296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:58:56.966801 kubelet[2296]: I0909 23:58:56.966473 2296 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 23:58:58.255216 kubelet[2296]: I0909 23:58:58.255159 2296 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 23:58:58.255216 kubelet[2296]: I0909 23:58:58.255204 2296 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 23:58:58.255904 kubelet[2296]: I0909 23:58:58.255862 2296 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 23:58:58.280036 kubelet[2296]: E0909 23:58:58.279990 2296 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:58:58.281414 kubelet[2296]: I0909 23:58:58.281390 2296 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 23:58:58.287852 kubelet[2296]: I0909 23:58:58.287833 2296 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 23:58:58.290692 kubelet[2296]: I0909 23:58:58.290668 2296 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 23:58:58.291535 kubelet[2296]: I0909 23:58:58.291416 2296 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:58:58.291638 kubelet[2296]: I0909 23:58:58.291451 2296 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:58:58.291724 kubelet[2296]: I0909 23:58:58.291708 2296 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:58:58.291724 kubelet[2296]: I0909 23:58:58.291717 2296 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 23:58:58.291926 kubelet[2296]: I0909 23:58:58.291897 2296 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:58:58.294353 kubelet[2296]: I0909 23:58:58.294220 2296 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 23:58:58.294353 kubelet[2296]: I0909 23:58:58.294263 2296 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:58:58.294439 kubelet[2296]: I0909 23:58:58.294372 2296 kubelet.go:352] "Adding apiserver pod source"
Sep 9 23:58:58.294439 kubelet[2296]: I0909 23:58:58.294384 2296 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:58:58.299025 kubelet[2296]: W0909 23:58:58.298958 2296 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused
Sep 9 23:58:58.299090 kubelet[2296]: E0909 23:58:58.299033 2296 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:58:58.299090 kubelet[2296]: W0909 23:58:58.298957 2296 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused
Sep 9 23:58:58.299090 kubelet[2296]: E0909 23:58:58.299068 2296 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:58:58.300827 kubelet[2296]: I0909 23:58:58.300800 2296 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 23:58:58.301591 kubelet[2296]: I0909 23:58:58.301571 2296 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:58:58.301872 kubelet[2296]: W0909 23:58:58.301800 2296 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 23:58:58.303036 kubelet[2296]: I0909 23:58:58.303014 2296 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 23:58:58.303106 kubelet[2296]: I0909 23:58:58.303054 2296 server.go:1287] "Started kubelet"
Sep 9 23:58:58.303650 kubelet[2296]: I0909 23:58:58.303618 2296 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:58:58.305531 kubelet[2296]: I0909 23:58:58.304578 2296 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 23:58:58.305531 kubelet[2296]: I0909 23:58:58.305169 2296 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:58:58.305531 kubelet[2296]: I0909 23:58:58.305486 2296 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 23:58:58.307176 kubelet[2296]: I0909 23:58:58.307150 2296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:58:58.307947 kubelet[2296]: E0909 23:58:58.307563 2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.125:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.125:6443:
connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c2af6cadfbd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 23:58:58.303032279 +0000 UTC m=+1.364697241,LastTimestamp:2025-09-09 23:58:58.303032279 +0000 UTC m=+1.364697241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 23:58:58.308190 kubelet[2296]: I0909 23:58:58.307861 2296 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:58:58.308881 kubelet[2296]: E0909 23:58:58.308549 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:58:58.308881 kubelet[2296]: I0909 23:58:58.308593 2296 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:58:58.308881 kubelet[2296]: I0909 23:58:58.308763 2296 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:58:58.308881 kubelet[2296]: I0909 23:58:58.308807 2296 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:58:58.308881 kubelet[2296]: E0909 23:58:58.308814 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="200ms" Sep 9 23:58:58.309156 kubelet[2296]: W0909 23:58:58.309121 2296 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 10.0.0.125:6443: connect: connection refused Sep 9 23:58:58.309722 kubelet[2296]: E0909 23:58:58.309700 2296 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:58:58.309815 kubelet[2296]: I0909 23:58:58.309479 2296 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:58:58.309904 kubelet[2296]: E0909 23:58:58.309880 2296 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:58:58.310039 kubelet[2296]: I0909 23:58:58.310009 2296 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:58:58.311449 kubelet[2296]: I0909 23:58:58.311423 2296 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:58:58.320543 kubelet[2296]: I0909 23:58:58.320501 2296 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:58:58.320543 kubelet[2296]: I0909 23:58:58.320536 2296 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:58:58.320708 kubelet[2296]: I0909 23:58:58.320558 2296 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:58:58.323835 kubelet[2296]: I0909 23:58:58.323672 2296 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 9 23:58:58.323993 kubelet[2296]: I0909 23:58:58.323966 2296 policy_none.go:49] "None policy: Start" Sep 9 23:58:58.324023 kubelet[2296]: I0909 23:58:58.324002 2296 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:58:58.324023 kubelet[2296]: I0909 23:58:58.324015 2296 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:58:58.325315 kubelet[2296]: I0909 23:58:58.324968 2296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 23:58:58.325315 kubelet[2296]: I0909 23:58:58.324995 2296 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 23:58:58.325315 kubelet[2296]: I0909 23:58:58.325023 2296 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:58:58.325315 kubelet[2296]: I0909 23:58:58.325029 2296 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 23:58:58.325315 kubelet[2296]: E0909 23:58:58.325073 2296 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:58:58.329968 kubelet[2296]: W0909 23:58:58.329916 2296 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Sep 9 23:58:58.330171 kubelet[2296]: E0909 23:58:58.330149 2296 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:58:58.333364 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 9 23:58:58.342885 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:58:58.345842 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 23:58:58.360774 kubelet[2296]: I0909 23:58:58.360743 2296 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:58:58.361404 kubelet[2296]: I0909 23:58:58.361380 2296 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:58:58.361610 kubelet[2296]: I0909 23:58:58.361567 2296 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:58:58.362327 kubelet[2296]: I0909 23:58:58.362301 2296 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:58:58.362683 kubelet[2296]: E0909 23:58:58.362650 2296 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:58:58.362730 kubelet[2296]: E0909 23:58:58.362696 2296 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 23:58:58.432459 systemd[1]: Created slice kubepods-burstable-poda81aa2f01b50f96951fe803f98a867ea.slice - libcontainer container kubepods-burstable-poda81aa2f01b50f96951fe803f98a867ea.slice. Sep 9 23:58:58.449232 kubelet[2296]: E0909 23:58:58.449197 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:58:58.450811 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
Sep 9 23:58:58.463783 kubelet[2296]: I0909 23:58:58.463548 2296 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:58:58.463973 kubelet[2296]: E0909 23:58:58.463949 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Sep 9 23:58:58.472502 kubelet[2296]: E0909 23:58:58.472455 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:58:58.474733 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 9 23:58:58.476204 kubelet[2296]: E0909 23:58:58.476159 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:58:58.509574 kubelet[2296]: E0909 23:58:58.509482 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms" Sep 9 23:58:58.509754 kubelet[2296]: I0909 23:58:58.509665 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a81aa2f01b50f96951fe803f98a867ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a81aa2f01b50f96951fe803f98a867ea\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:58:58.509754 kubelet[2296]: I0909 23:58:58.509697 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a81aa2f01b50f96951fe803f98a867ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a81aa2f01b50f96951fe803f98a867ea\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:58:58.509754 kubelet[2296]: I0909 23:58:58.509717 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:58:58.509754 kubelet[2296]: I0909 23:58:58.509731 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:58:58.509754 kubelet[2296]: I0909 23:58:58.509747 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a81aa2f01b50f96951fe803f98a867ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a81aa2f01b50f96951fe803f98a867ea\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:58:58.509871 kubelet[2296]: I0909 23:58:58.509767 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:58:58.509871 kubelet[2296]: I0909 23:58:58.509781 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:58:58.509871 kubelet[2296]: I0909 23:58:58.509795 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:58:58.509871 kubelet[2296]: I0909 23:58:58.509810 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:58:58.666214 kubelet[2296]: I0909 23:58:58.666165 2296 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:58:58.666616 kubelet[2296]: E0909 23:58:58.666574 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Sep 9 23:58:58.751285 containerd[1546]: time="2025-09-09T23:58:58.750940479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a81aa2f01b50f96951fe803f98a867ea,Namespace:kube-system,Attempt:0,}" Sep 9 23:58:58.769524 containerd[1546]: time="2025-09-09T23:58:58.769421359Z" level=info msg="connecting to shim de92efb0025e6df95d381c8cab5fae12cd4b8fc859468cc0cf8b40b39a9be35f" address="unix:///run/containerd/s/f1292af920d73e7bcb8ccec9cbf98d089b9d97ed9530f717cdaf13efc62fe7b1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:58:58.774285 containerd[1546]: 
time="2025-09-09T23:58:58.774053879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 23:58:58.777439 containerd[1546]: time="2025-09-09T23:58:58.777408799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 23:58:58.803715 systemd[1]: Started cri-containerd-de92efb0025e6df95d381c8cab5fae12cd4b8fc859468cc0cf8b40b39a9be35f.scope - libcontainer container de92efb0025e6df95d381c8cab5fae12cd4b8fc859468cc0cf8b40b39a9be35f. Sep 9 23:58:58.843449 containerd[1546]: time="2025-09-09T23:58:58.843166919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a81aa2f01b50f96951fe803f98a867ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"de92efb0025e6df95d381c8cab5fae12cd4b8fc859468cc0cf8b40b39a9be35f\"" Sep 9 23:58:58.848231 containerd[1546]: time="2025-09-09T23:58:58.848196479Z" level=info msg="CreateContainer within sandbox \"de92efb0025e6df95d381c8cab5fae12cd4b8fc859468cc0cf8b40b39a9be35f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:58:58.853011 containerd[1546]: time="2025-09-09T23:58:58.852969319Z" level=info msg="connecting to shim 217df357db8a779a930f743421a3dd7b0c5d980147bdd51a3031ebcc307a525c" address="unix:///run/containerd/s/75db7a22ce20b65e2531f5abc40ab051280a964511c464893e63cf4596de3ab4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:58:58.853285 containerd[1546]: time="2025-09-09T23:58:58.853242239Z" level=info msg="connecting to shim 830903c03828793fde7f6e8e0149e5b22efb1df19303a79deaf2a01e729e2d18" address="unix:///run/containerd/s/92e93ab89f0d2bc945a85fcc368d29e5dddda2ff5667f3dbe835a3a296d71615" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:58:58.859265 containerd[1546]: time="2025-09-09T23:58:58.859168119Z" level=info msg="Container 
40ed9eddd1c7d2ae92cd52cc5f945d916b2479fa8783f8467bd1c0759765c510: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:58:58.871030 containerd[1546]: time="2025-09-09T23:58:58.870992359Z" level=info msg="CreateContainer within sandbox \"de92efb0025e6df95d381c8cab5fae12cd4b8fc859468cc0cf8b40b39a9be35f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40ed9eddd1c7d2ae92cd52cc5f945d916b2479fa8783f8467bd1c0759765c510\"" Sep 9 23:58:58.872671 containerd[1546]: time="2025-09-09T23:58:58.872645119Z" level=info msg="StartContainer for \"40ed9eddd1c7d2ae92cd52cc5f945d916b2479fa8783f8467bd1c0759765c510\"" Sep 9 23:58:58.874018 containerd[1546]: time="2025-09-09T23:58:58.873987239Z" level=info msg="connecting to shim 40ed9eddd1c7d2ae92cd52cc5f945d916b2479fa8783f8467bd1c0759765c510" address="unix:///run/containerd/s/f1292af920d73e7bcb8ccec9cbf98d089b9d97ed9530f717cdaf13efc62fe7b1" protocol=ttrpc version=3 Sep 9 23:58:58.874666 systemd[1]: Started cri-containerd-830903c03828793fde7f6e8e0149e5b22efb1df19303a79deaf2a01e729e2d18.scope - libcontainer container 830903c03828793fde7f6e8e0149e5b22efb1df19303a79deaf2a01e729e2d18. Sep 9 23:58:58.878305 systemd[1]: Started cri-containerd-217df357db8a779a930f743421a3dd7b0c5d980147bdd51a3031ebcc307a525c.scope - libcontainer container 217df357db8a779a930f743421a3dd7b0c5d980147bdd51a3031ebcc307a525c. Sep 9 23:58:58.896675 systemd[1]: Started cri-containerd-40ed9eddd1c7d2ae92cd52cc5f945d916b2479fa8783f8467bd1c0759765c510.scope - libcontainer container 40ed9eddd1c7d2ae92cd52cc5f945d916b2479fa8783f8467bd1c0759765c510. 
Sep 9 23:58:58.910751 kubelet[2296]: E0909 23:58:58.910711 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms" Sep 9 23:58:58.920490 containerd[1546]: time="2025-09-09T23:58:58.920445799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"830903c03828793fde7f6e8e0149e5b22efb1df19303a79deaf2a01e729e2d18\"" Sep 9 23:58:58.923154 containerd[1546]: time="2025-09-09T23:58:58.923116279Z" level=info msg="CreateContainer within sandbox \"830903c03828793fde7f6e8e0149e5b22efb1df19303a79deaf2a01e729e2d18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:58:58.929077 containerd[1546]: time="2025-09-09T23:58:58.929036999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"217df357db8a779a930f743421a3dd7b0c5d980147bdd51a3031ebcc307a525c\"" Sep 9 23:58:58.933284 containerd[1546]: time="2025-09-09T23:58:58.933246559Z" level=info msg="CreateContainer within sandbox \"217df357db8a779a930f743421a3dd7b0c5d980147bdd51a3031ebcc307a525c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:58:58.937099 containerd[1546]: time="2025-09-09T23:58:58.936764759Z" level=info msg="Container 88921540aaed6839553fb1d53e97c9b486fe30ad278af6e1e9aa9e041e57573a: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:58:58.946024 containerd[1546]: time="2025-09-09T23:58:58.945958759Z" level=info msg="StartContainer for \"40ed9eddd1c7d2ae92cd52cc5f945d916b2479fa8783f8467bd1c0759765c510\" returns successfully" Sep 9 23:58:58.947210 containerd[1546]: time="2025-09-09T23:58:58.947150679Z" 
level=info msg="Container e17027fee88a8c538a8e0288fd90560e5e2e65a4831902f6b72b438beb341522: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:58:58.951539 containerd[1546]: time="2025-09-09T23:58:58.950629119Z" level=info msg="CreateContainer within sandbox \"830903c03828793fde7f6e8e0149e5b22efb1df19303a79deaf2a01e729e2d18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"88921540aaed6839553fb1d53e97c9b486fe30ad278af6e1e9aa9e041e57573a\"" Sep 9 23:58:58.951539 containerd[1546]: time="2025-09-09T23:58:58.951134239Z" level=info msg="StartContainer for \"88921540aaed6839553fb1d53e97c9b486fe30ad278af6e1e9aa9e041e57573a\"" Sep 9 23:58:58.952458 containerd[1546]: time="2025-09-09T23:58:58.952434559Z" level=info msg="connecting to shim 88921540aaed6839553fb1d53e97c9b486fe30ad278af6e1e9aa9e041e57573a" address="unix:///run/containerd/s/92e93ab89f0d2bc945a85fcc368d29e5dddda2ff5667f3dbe835a3a296d71615" protocol=ttrpc version=3 Sep 9 23:58:58.953856 containerd[1546]: time="2025-09-09T23:58:58.953809559Z" level=info msg="CreateContainer within sandbox \"217df357db8a779a930f743421a3dd7b0c5d980147bdd51a3031ebcc307a525c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e17027fee88a8c538a8e0288fd90560e5e2e65a4831902f6b72b438beb341522\"" Sep 9 23:58:58.954726 containerd[1546]: time="2025-09-09T23:58:58.954696279Z" level=info msg="StartContainer for \"e17027fee88a8c538a8e0288fd90560e5e2e65a4831902f6b72b438beb341522\"" Sep 9 23:58:58.956765 containerd[1546]: time="2025-09-09T23:58:58.956729399Z" level=info msg="connecting to shim e17027fee88a8c538a8e0288fd90560e5e2e65a4831902f6b72b438beb341522" address="unix:///run/containerd/s/75db7a22ce20b65e2531f5abc40ab051280a964511c464893e63cf4596de3ab4" protocol=ttrpc version=3 Sep 9 23:58:58.978690 systemd[1]: Started cri-containerd-88921540aaed6839553fb1d53e97c9b486fe30ad278af6e1e9aa9e041e57573a.scope - libcontainer container 
88921540aaed6839553fb1d53e97c9b486fe30ad278af6e1e9aa9e041e57573a. Sep 9 23:58:58.980082 systemd[1]: Started cri-containerd-e17027fee88a8c538a8e0288fd90560e5e2e65a4831902f6b72b438beb341522.scope - libcontainer container e17027fee88a8c538a8e0288fd90560e5e2e65a4831902f6b72b438beb341522. Sep 9 23:58:59.023175 containerd[1546]: time="2025-09-09T23:58:59.023032919Z" level=info msg="StartContainer for \"e17027fee88a8c538a8e0288fd90560e5e2e65a4831902f6b72b438beb341522\" returns successfully" Sep 9 23:58:59.031764 containerd[1546]: time="2025-09-09T23:58:59.031696719Z" level=info msg="StartContainer for \"88921540aaed6839553fb1d53e97c9b486fe30ad278af6e1e9aa9e041e57573a\" returns successfully" Sep 9 23:58:59.068135 kubelet[2296]: I0909 23:58:59.068101 2296 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:58:59.335543 kubelet[2296]: E0909 23:58:59.335431 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:58:59.337929 kubelet[2296]: E0909 23:58:59.337782 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:58:59.340876 kubelet[2296]: E0909 23:58:59.340764 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:59:00.342097 kubelet[2296]: E0909 23:59:00.341855 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:59:00.342097 kubelet[2296]: E0909 23:59:00.341931 2296 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:59:00.887060 kubelet[2296]: E0909 23:59:00.887024 2296 nodelease.go:49] "Failed to get 
node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 23:59:00.980208 kubelet[2296]: I0909 23:59:00.979883 2296 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 23:59:01.008626 kubelet[2296]: I0909 23:59:01.008589 2296 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:59:01.017094 kubelet[2296]: E0909 23:59:01.017038 2296 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 23:59:01.017094 kubelet[2296]: I0909 23:59:01.017070 2296 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:59:01.019272 kubelet[2296]: E0909 23:59:01.019243 2296 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 23:59:01.019272 kubelet[2296]: I0909 23:59:01.019269 2296 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:59:01.021623 kubelet[2296]: E0909 23:59:01.020921 2296 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:59:01.301079 kubelet[2296]: I0909 23:59:01.300945 2296 apiserver.go:52] "Watching apiserver" Sep 9 23:59:01.309366 kubelet[2296]: I0909 23:59:01.309317 2296 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:59:02.873201 systemd[1]: Reload requested from client PID 2563 ('systemctl') (unit session-7.scope)... Sep 9 23:59:02.873221 systemd[1]: Reloading... 
Sep 9 23:59:02.936655 zram_generator::config[2604]: No configuration found. Sep 9 23:59:03.109365 systemd[1]: Reloading finished in 235 ms. Sep 9 23:59:03.129079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:59:03.142371 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:59:03.143590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:59:03.143649 systemd[1]: kubelet.service: Consumed 1.511s CPU time, 128.3M memory peak. Sep 9 23:59:03.145165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:59:03.288792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:59:03.316890 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:59:03.367000 kubelet[2648]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:59:03.367000 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:59:03.367000 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:59:03.367000 kubelet[2648]: I0909 23:59:03.357119 2648 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:59:03.376310 kubelet[2648]: I0909 23:59:03.376261 2648 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 23:59:03.376310 kubelet[2648]: I0909 23:59:03.376291 2648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:59:03.376549 kubelet[2648]: I0909 23:59:03.376531 2648 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 23:59:03.377710 kubelet[2648]: I0909 23:59:03.377692 2648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 23:59:03.380311 kubelet[2648]: I0909 23:59:03.379921 2648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:59:03.385346 kubelet[2648]: I0909 23:59:03.385313 2648 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:59:03.387810 kubelet[2648]: I0909 23:59:03.387790 2648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Sep 9 23:59:03.388041 kubelet[2648]: I0909 23:59:03.387988 2648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:59:03.388161 kubelet[2648]: I0909 23:59:03.388013 2648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:59:03.388227 kubelet[2648]: I0909 23:59:03.388171 2648 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:59:03.388227 kubelet[2648]: I0909 23:59:03.388178 2648 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 23:59:03.388227 kubelet[2648]: I0909 23:59:03.388222 2648 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:59:03.388366 kubelet[2648]: I0909 23:59:03.388354 2648 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 23:59:03.388396 kubelet[2648]: I0909 23:59:03.388369 2648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:59:03.388396 kubelet[2648]: I0909 23:59:03.388388 2648 kubelet.go:352] "Adding apiserver pod source"
Sep 9 23:59:03.388396 kubelet[2648]: I0909 23:59:03.388396 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:59:03.393582 kubelet[2648]: I0909 23:59:03.392087 2648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 23:59:03.393582 kubelet[2648]: I0909 23:59:03.392541 2648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:59:03.393582 kubelet[2648]: I0909 23:59:03.392984 2648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 23:59:03.393582 kubelet[2648]: I0909 23:59:03.393016 2648 server.go:1287] "Started kubelet"
Sep 9 23:59:03.393582 kubelet[2648]: I0909 23:59:03.393138 2648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:59:03.393582 kubelet[2648]: I0909 23:59:03.393188 2648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:59:03.393582 kubelet[2648]: I0909 23:59:03.393432 2648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 23:59:03.394746 kubelet[2648]: I0909 23:59:03.394727 2648 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 23:59:03.394979 kubelet[2648]: I0909 23:59:03.394959 2648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:59:03.395480 kubelet[2648]: I0909 23:59:03.395079 2648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 23:59:03.396478 kubelet[2648]: I0909 23:59:03.396456 2648 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 23:59:03.396663 kubelet[2648]: I0909 23:59:03.396650 2648 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 23:59:03.396811 kubelet[2648]: I0909 23:59:03.396799 2648 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 23:59:03.397264 kubelet[2648]: E0909 23:59:03.397226 2648 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 23:59:03.398054 kubelet[2648]: I0909 23:59:03.398030 2648 factory.go:221] Registration of the systemd container factory successfully
Sep 9 23:59:03.398226 kubelet[2648]: I0909 23:59:03.398201 2648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 23:59:03.401527 kubelet[2648]: E0909 23:59:03.399763 2648 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 23:59:03.406334 kubelet[2648]: I0909 23:59:03.406300 2648 factory.go:221] Registration of the containerd container factory successfully
Sep 9 23:59:03.416228 kubelet[2648]: I0909 23:59:03.416185 2648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 23:59:03.422529 kubelet[2648]: I0909 23:59:03.421686 2648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 23:59:03.422529 kubelet[2648]: I0909 23:59:03.421707 2648 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 23:59:03.422529 kubelet[2648]: I0909 23:59:03.421722 2648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 23:59:03.422529 kubelet[2648]: I0909 23:59:03.421728 2648 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 23:59:03.422529 kubelet[2648]: E0909 23:59:03.421763 2648 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 23:59:03.448133 kubelet[2648]: I0909 23:59:03.448109 2648 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 23:59:03.448133 kubelet[2648]: I0909 23:59:03.448127 2648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 23:59:03.448248 kubelet[2648]: I0909 23:59:03.448149 2648 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:59:03.448307 kubelet[2648]: I0909 23:59:03.448290 2648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 23:59:03.448340 kubelet[2648]: I0909 23:59:03.448306 2648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 23:59:03.448340 kubelet[2648]: I0909 23:59:03.448331 2648 policy_none.go:49] "None policy: Start"
Sep 9 23:59:03.448380 kubelet[2648]: I0909 23:59:03.448342 2648 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 23:59:03.448380 kubelet[2648]: I0909 23:59:03.448352 2648 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 23:59:03.448450 kubelet[2648]: I0909 23:59:03.448437 2648 state_mem.go:75] "Updated machine memory state"
Sep 9 23:59:03.451930 kubelet[2648]: I0909 23:59:03.451912 2648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 23:59:03.452406 kubelet[2648]: I0909 23:59:03.452350 2648 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 23:59:03.452406 kubelet[2648]: I0909 23:59:03.452369 2648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 23:59:03.452646 kubelet[2648]: I0909 23:59:03.452621 2648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 23:59:03.453466 kubelet[2648]: E0909 23:59:03.453443 2648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 23:59:03.522945 kubelet[2648]: I0909 23:59:03.522891 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 23:59:03.522945 kubelet[2648]: I0909 23:59:03.522899 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 23:59:03.523406 kubelet[2648]: I0909 23:59:03.523220 2648 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 23:59:03.554553 kubelet[2648]: I0909 23:59:03.554505 2648 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 23:59:03.561660 kubelet[2648]: I0909 23:59:03.561631 2648 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 9 23:59:03.561836 kubelet[2648]: I0909 23:59:03.561752 2648 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 23:59:03.697984 kubelet[2648]: I0909 23:59:03.697777 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a81aa2f01b50f96951fe803f98a867ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a81aa2f01b50f96951fe803f98a867ea\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 23:59:03.697984 kubelet[2648]: I0909 23:59:03.697813 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 23:59:03.697984 kubelet[2648]: I0909 23:59:03.697831 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a81aa2f01b50f96951fe803f98a867ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a81aa2f01b50f96951fe803f98a867ea\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 23:59:03.697984 kubelet[2648]: I0909 23:59:03.697849 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a81aa2f01b50f96951fe803f98a867ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a81aa2f01b50f96951fe803f98a867ea\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 23:59:03.697984 kubelet[2648]: I0909 23:59:03.697867 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 23:59:03.698762 kubelet[2648]: I0909 23:59:03.697882 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 23:59:03.698762 kubelet[2648]: I0909 23:59:03.697942 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 23:59:03.698762 kubelet[2648]: I0909 23:59:03.697976 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 23:59:03.698762 kubelet[2648]: I0909 23:59:03.697996 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 23:59:03.867389 sudo[2681]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 9 23:59:03.867679 sudo[2681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 9 23:59:04.180633 sudo[2681]: pam_unix(sudo:session): session closed for user root
Sep 9 23:59:04.389498 kubelet[2648]: I0909 23:59:04.389466 2648 apiserver.go:52] "Watching apiserver"
Sep 9 23:59:04.397392 kubelet[2648]: I0909 23:59:04.397354 2648 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 23:59:04.453980 kubelet[2648]: I0909 23:59:04.453831 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.453800879 podStartE2EDuration="1.453800879s" podCreationTimestamp="2025-09-09 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:59:04.453426519 +0000 UTC m=+1.132832921" watchObservedRunningTime="2025-09-09 23:59:04.453800879 +0000 UTC m=+1.133207281"
Sep 9 23:59:04.469297 kubelet[2648]: I0909 23:59:04.469219 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.469203639 podStartE2EDuration="1.469203639s" podCreationTimestamp="2025-09-09 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:59:04.468232719 +0000 UTC m=+1.147639121" watchObservedRunningTime="2025-09-09 23:59:04.469203639 +0000 UTC m=+1.148610041"
Sep 9 23:59:04.469463 kubelet[2648]: I0909 23:59:04.469351 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4693452790000001 podStartE2EDuration="1.469345279s" podCreationTimestamp="2025-09-09 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:59:04.460921159 +0000 UTC m=+1.140327561" watchObservedRunningTime="2025-09-09 23:59:04.469345279 +0000 UTC m=+1.148751681"
Sep 9 23:59:06.004895 sudo[1742]: pam_unix(sudo:session): session closed for user root
Sep 9 23:59:06.006596 sshd[1741]: Connection closed by 10.0.0.1 port 53834
Sep 9 23:59:06.007814 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Sep 9 23:59:06.011024 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:53834.service: Deactivated successfully.
Sep 9 23:59:06.012740 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 23:59:06.012942 systemd[1]: session-7.scope: Consumed 5.922s CPU time, 257.9M memory peak.
Sep 9 23:59:06.013809 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Sep 9 23:59:06.014902 systemd-logind[1520]: Removed session 7.
Sep 9 23:59:09.518689 kubelet[2648]: I0909 23:59:09.518598 2648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 23:59:09.519029 containerd[1546]: time="2025-09-09T23:59:09.518904304Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 23:59:09.519209 kubelet[2648]: I0909 23:59:09.519063 2648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 23:59:10.582524 kubelet[2648]: I0909 23:59:10.582423 2648 status_manager.go:890] "Failed to get status for pod" podUID="cbccf766-97b9-4df1-83e9-8eef5222ea47" pod="kube-system/kube-proxy-qbwvr" err="pods \"kube-proxy-qbwvr\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
Sep 9 23:59:10.584061 kubelet[2648]: W0909 23:59:10.584003 2648 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 9 23:59:10.584061 kubelet[2648]: E0909 23:59:10.584041 2648 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 9 23:59:10.590691 systemd[1]: Created slice kubepods-besteffort-podcbccf766_97b9_4df1_83e9_8eef5222ea47.slice - libcontainer container kubepods-besteffort-podcbccf766_97b9_4df1_83e9_8eef5222ea47.slice.
Sep 9 23:59:10.594110 kubelet[2648]: I0909 23:59:10.593697 2648 status_manager.go:890] "Failed to get status for pod" podUID="974989e6-23e2-445e-b544-682979f8bef6" pod="kube-system/cilium-f6h4v" err="pods \"cilium-f6h4v\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
Sep 9 23:59:10.594794 kubelet[2648]: W0909 23:59:10.594673 2648 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 9 23:59:10.595122 kubelet[2648]: W0909 23:59:10.594877 2648 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 9 23:59:10.595122 kubelet[2648]: E0909 23:59:10.594934 2648 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 9 23:59:10.595310 kubelet[2648]: E0909 23:59:10.595187 2648 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 9 23:59:10.605354 systemd[1]: Created slice kubepods-burstable-pod974989e6_23e2_445e_b544_682979f8bef6.slice - libcontainer container kubepods-burstable-pod974989e6_23e2_445e_b544_682979f8bef6.slice.
Sep 9 23:59:10.643292 kubelet[2648]: I0909 23:59:10.643234 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-bpf-maps\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643407 kubelet[2648]: I0909 23:59:10.643320 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-lib-modules\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643407 kubelet[2648]: I0909 23:59:10.643341 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vf28\" (UniqueName: \"kubernetes.io/projected/cbccf766-97b9-4df1-83e9-8eef5222ea47-kube-api-access-7vf28\") pod \"kube-proxy-qbwvr\" (UID: \"cbccf766-97b9-4df1-83e9-8eef5222ea47\") " pod="kube-system/kube-proxy-qbwvr"
Sep 9 23:59:10.643407 kubelet[2648]: I0909 23:59:10.643368 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/974989e6-23e2-445e-b544-682979f8bef6-cilium-config-path\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643407 kubelet[2648]: I0909 23:59:10.643383 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-net\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643407 kubelet[2648]: I0909 23:59:10.643400 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-hostproc\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643563 kubelet[2648]: I0909 23:59:10.643414 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-cgroup\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643563 kubelet[2648]: I0909 23:59:10.643457 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-hubble-tls\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643563 kubelet[2648]: I0909 23:59:10.643496 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbccf766-97b9-4df1-83e9-8eef5222ea47-xtables-lock\") pod \"kube-proxy-qbwvr\" (UID: \"cbccf766-97b9-4df1-83e9-8eef5222ea47\") " pod="kube-system/kube-proxy-qbwvr"
Sep 9 23:59:10.643563 kubelet[2648]: I0909 23:59:10.643543 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-xtables-lock\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643644 kubelet[2648]: I0909 23:59:10.643565 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-kernel\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643644 kubelet[2648]: I0909 23:59:10.643584 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbccf766-97b9-4df1-83e9-8eef5222ea47-lib-modules\") pod \"kube-proxy-qbwvr\" (UID: \"cbccf766-97b9-4df1-83e9-8eef5222ea47\") " pod="kube-system/kube-proxy-qbwvr"
Sep 9 23:59:10.643644 kubelet[2648]: I0909 23:59:10.643601 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-etc-cni-netd\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643644 kubelet[2648]: I0909 23:59:10.643624 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cbccf766-97b9-4df1-83e9-8eef5222ea47-kube-proxy\") pod \"kube-proxy-qbwvr\" (UID: \"cbccf766-97b9-4df1-83e9-8eef5222ea47\") " pod="kube-system/kube-proxy-qbwvr"
Sep 9 23:59:10.643644 kubelet[2648]: I0909 23:59:10.643640 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-run\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643739 kubelet[2648]: I0909 23:59:10.643654 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/974989e6-23e2-445e-b544-682979f8bef6-clustermesh-secrets\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643739 kubelet[2648]: I0909 23:59:10.643672 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jv5d\" (UniqueName: \"kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-kube-api-access-8jv5d\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.643739 kubelet[2648]: I0909 23:59:10.643685 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cni-path\") pod \"cilium-f6h4v\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " pod="kube-system/cilium-f6h4v"
Sep 9 23:59:10.693951 systemd[1]: Created slice kubepods-besteffort-pod31f242ad_af2c_4d0c_ac05_4bd5506759b0.slice - libcontainer container kubepods-besteffort-pod31f242ad_af2c_4d0c_ac05_4bd5506759b0.slice.
Sep 9 23:59:10.744842 kubelet[2648]: I0909 23:59:10.744785 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f242ad-af2c-4d0c-ac05-4bd5506759b0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xgn6d\" (UID: \"31f242ad-af2c-4d0c-ac05-4bd5506759b0\") " pod="kube-system/cilium-operator-6c4d7847fc-xgn6d"
Sep 9 23:59:10.745000 kubelet[2648]: I0909 23:59:10.744978 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2h2\" (UniqueName: \"kubernetes.io/projected/31f242ad-af2c-4d0c-ac05-4bd5506759b0-kube-api-access-9n2h2\") pod \"cilium-operator-6c4d7847fc-xgn6d\" (UID: \"31f242ad-af2c-4d0c-ac05-4bd5506759b0\") " pod="kube-system/cilium-operator-6c4d7847fc-xgn6d"
Sep 9 23:59:10.998957 containerd[1546]: time="2025-09-09T23:59:10.998845422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xgn6d,Uid:31f242ad-af2c-4d0c-ac05-4bd5506759b0,Namespace:kube-system,Attempt:0,}"
Sep 9 23:59:11.019450 containerd[1546]: time="2025-09-09T23:59:11.019333011Z" level=info msg="connecting to shim 97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d" address="unix:///run/containerd/s/bbeec754772fbd16617fbe7b359f99a80123cd4631c72e3e40cc4df42e498ad2" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:59:11.050751 systemd[1]: Started cri-containerd-97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d.scope - libcontainer container 97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d.
Sep 9 23:59:11.088846 containerd[1546]: time="2025-09-09T23:59:11.088747989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xgn6d,Uid:31f242ad-af2c-4d0c-ac05-4bd5506759b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\""
Sep 9 23:59:11.092298 containerd[1546]: time="2025-09-09T23:59:11.092265914Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 23:59:11.746453 kubelet[2648]: E0909 23:59:11.746251 2648 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 9 23:59:11.746453 kubelet[2648]: E0909 23:59:11.746292 2648 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-f6h4v: failed to sync secret cache: timed out waiting for the condition
Sep 9 23:59:11.746453 kubelet[2648]: E0909 23:59:11.746254 2648 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Sep 9 23:59:11.746453 kubelet[2648]: E0909 23:59:11.746366 2648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-hubble-tls podName:974989e6-23e2-445e-b544-682979f8bef6 nodeName:}" failed. No retries permitted until 2025-09-09 23:59:12.246344997 +0000 UTC m=+8.925751399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-hubble-tls") pod "cilium-f6h4v" (UID: "974989e6-23e2-445e-b544-682979f8bef6") : failed to sync secret cache: timed out waiting for the condition
Sep 9 23:59:11.746453 kubelet[2648]: E0909 23:59:11.746387 2648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/974989e6-23e2-445e-b544-682979f8bef6-clustermesh-secrets podName:974989e6-23e2-445e-b544-682979f8bef6 nodeName:}" failed. No retries permitted until 2025-09-09 23:59:12.246381317 +0000 UTC m=+8.925787679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/974989e6-23e2-445e-b544-682979f8bef6-clustermesh-secrets") pod "cilium-f6h4v" (UID: "974989e6-23e2-445e-b544-682979f8bef6") : failed to sync secret cache: timed out waiting for the condition
Sep 9 23:59:11.802316 containerd[1546]: time="2025-09-09T23:59:11.802014836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbwvr,Uid:cbccf766-97b9-4df1-83e9-8eef5222ea47,Namespace:kube-system,Attempt:0,}"
Sep 9 23:59:11.819222 containerd[1546]: time="2025-09-09T23:59:11.818872260Z" level=info msg="connecting to shim 6a119627388fcbcde4960237f591d298b14c1793daac559d4de8ef7f9fe09a01" address="unix:///run/containerd/s/2ff312f5f1c44ce5faa70d56ac4321d68b62bc0d8629a80efbe46499d65c54df" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:59:11.853728 systemd[1]: Started cri-containerd-6a119627388fcbcde4960237f591d298b14c1793daac559d4de8ef7f9fe09a01.scope - libcontainer container 6a119627388fcbcde4960237f591d298b14c1793daac559d4de8ef7f9fe09a01.
Sep 9 23:59:11.887237 containerd[1546]: time="2025-09-09T23:59:11.887184356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbwvr,Uid:cbccf766-97b9-4df1-83e9-8eef5222ea47,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a119627388fcbcde4960237f591d298b14c1793daac559d4de8ef7f9fe09a01\""
Sep 9 23:59:11.893741 containerd[1546]: time="2025-09-09T23:59:11.893650245Z" level=info msg="CreateContainer within sandbox \"6a119627388fcbcde4960237f591d298b14c1793daac559d4de8ef7f9fe09a01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 23:59:11.912307 containerd[1546]: time="2025-09-09T23:59:11.912075591Z" level=info msg="Container 9e001208eeea3e6dfd9df0a525dbc7879edf638af189638389bb87d19fd39df1: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:11.914424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479484682.mount: Deactivated successfully.
Sep 9 23:59:11.923027 containerd[1546]: time="2025-09-09T23:59:11.922984527Z" level=info msg="CreateContainer within sandbox \"6a119627388fcbcde4960237f591d298b14c1793daac559d4de8ef7f9fe09a01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9e001208eeea3e6dfd9df0a525dbc7879edf638af189638389bb87d19fd39df1\""
Sep 9 23:59:11.924979 containerd[1546]: time="2025-09-09T23:59:11.924771089Z" level=info msg="StartContainer for \"9e001208eeea3e6dfd9df0a525dbc7879edf638af189638389bb87d19fd39df1\""
Sep 9 23:59:11.926435 containerd[1546]: time="2025-09-09T23:59:11.926405132Z" level=info msg="connecting to shim 9e001208eeea3e6dfd9df0a525dbc7879edf638af189638389bb87d19fd39df1" address="unix:///run/containerd/s/2ff312f5f1c44ce5faa70d56ac4321d68b62bc0d8629a80efbe46499d65c54df" protocol=ttrpc version=3
Sep 9 23:59:11.953701 systemd[1]: Started cri-containerd-9e001208eeea3e6dfd9df0a525dbc7879edf638af189638389bb87d19fd39df1.scope - libcontainer container 9e001208eeea3e6dfd9df0a525dbc7879edf638af189638389bb87d19fd39df1.
Sep 9 23:59:11.992294 containerd[1546]: time="2025-09-09T23:59:11.992240585Z" level=info msg="StartContainer for \"9e001208eeea3e6dfd9df0a525dbc7879edf638af189638389bb87d19fd39df1\" returns successfully"
Sep 9 23:59:12.408825 containerd[1546]: time="2025-09-09T23:59:12.408784897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f6h4v,Uid:974989e6-23e2-445e-b544-682979f8bef6,Namespace:kube-system,Attempt:0,}"
Sep 9 23:59:12.430244 containerd[1546]: time="2025-09-09T23:59:12.430199605Z" level=info msg="connecting to shim 10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539" address="unix:///run/containerd/s/573aacd305aa48bfc953f90a204efeb3a0f308756b47bb04f85a26521061249a" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:59:12.454693 systemd[1]: Started cri-containerd-10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539.scope - libcontainer container 10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539.
Sep 9 23:59:12.464784 kubelet[2648]: I0909 23:59:12.464678 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbwvr" podStartSLOduration=2.464660891 podStartE2EDuration="2.464660891s" podCreationTimestamp="2025-09-09 23:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:59:12.46439141 +0000 UTC m=+9.143797812" watchObservedRunningTime="2025-09-09 23:59:12.464660891 +0000 UTC m=+9.144067293"
Sep 9 23:59:12.488980 containerd[1546]: time="2025-09-09T23:59:12.488904843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f6h4v,Uid:974989e6-23e2-445e-b544-682979f8bef6,Namespace:kube-system,Attempt:0,} returns sandbox id \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\""
Sep 9 23:59:12.675003 containerd[1546]: time="2025-09-09T23:59:12.674897289Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 9 23:59:12.677079 containerd[1546]: time="2025-09-09T23:59:12.677053452Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.584752178s"
Sep 9 23:59:12.677142 containerd[1546]: time="2025-09-09T23:59:12.677084692Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 23:59:12.679286 containerd[1546]: time="2025-09-09T23:59:12.679083775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 23:59:12.680888 containerd[1546]: time="2025-09-09T23:59:12.680287656Z" level=info msg="CreateContainer within sandbox \"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 23:59:12.685464 containerd[1546]: time="2025-09-09T23:59:12.685413823Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:59:12.686035 containerd[1546]: time="2025-09-09T23:59:12.686000464Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:59:12.687547 containerd[1546]: time="2025-09-09T23:59:12.687496506Z" level=info msg="Container b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:12.692762 containerd[1546]: time="2025-09-09T23:59:12.692726473Z" level=info msg="CreateContainer within sandbox \"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\""
Sep 9 23:59:12.694056 containerd[1546]: time="2025-09-09T23:59:12.694017114Z" level=info msg="StartContainer for \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\""
Sep 9 23:59:12.694880 containerd[1546]: time="2025-09-09T23:59:12.694856635Z" level=info msg="connecting to shim b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94" address="unix:///run/containerd/s/bbeec754772fbd16617fbe7b359f99a80123cd4631c72e3e40cc4df42e498ad2" protocol=ttrpc version=3
Sep 9 23:59:12.715681 systemd[1]: Started cri-containerd-b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94.scope - libcontainer container b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94.
Sep 9 23:59:12.739782 containerd[1546]: time="2025-09-09T23:59:12.739712815Z" level=info msg="StartContainer for \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" returns successfully"
Sep 9 23:59:13.498567 kubelet[2648]: I0909 23:59:13.498402 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xgn6d" podStartSLOduration=1.9098329550000002 podStartE2EDuration="3.498384858s" podCreationTimestamp="2025-09-09 23:59:10 +0000 UTC" firstStartedPulling="2025-09-09 23:59:11.090366631 +0000 UTC m=+7.769772993" lastFinishedPulling="2025-09-09 23:59:12.678918494 +0000 UTC m=+9.358324896" observedRunningTime="2025-09-09 23:59:13.47592623 +0000 UTC m=+10.155332672" watchObservedRunningTime="2025-09-09 23:59:13.498384858 +0000 UTC m=+10.177791220"
Sep 9 23:59:20.946645 update_engine[1523]: I20250909 23:59:20.946574 1523 update_attempter.cc:509] Updating boot flags...
Sep 9 23:59:23.755284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838544040.mount: Deactivated successfully.
Sep 9 23:59:25.526547 containerd[1546]: time="2025-09-09T23:59:25.526444245Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:59:25.528769 containerd[1546]: time="2025-09-09T23:59:25.528743646Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 23:59:25.531669 containerd[1546]: time="2025-09-09T23:59:25.531622488Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:59:25.533207 containerd[1546]: time="2025-09-09T23:59:25.532802728Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.853680513s"
Sep 9 23:59:25.533207 containerd[1546]: time="2025-09-09T23:59:25.532841488Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 23:59:25.555104 containerd[1546]: time="2025-09-09T23:59:25.555052901Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 23:59:25.588449 containerd[1546]: time="2025-09-09T23:59:25.580736756Z" level=info msg="Container ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:25.594650 containerd[1546]: time="2025-09-09T23:59:25.594587724Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\""
Sep 9 23:59:25.595545 containerd[1546]: time="2025-09-09T23:59:25.595468724Z" level=info msg="StartContainer for \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\""
Sep 9 23:59:25.597828 containerd[1546]: time="2025-09-09T23:59:25.597603805Z" level=info msg="connecting to shim ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4" address="unix:///run/containerd/s/573aacd305aa48bfc953f90a204efeb3a0f308756b47bb04f85a26521061249a" protocol=ttrpc version=3
Sep 9 23:59:25.654897 systemd[1]: Started cri-containerd-ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4.scope - libcontainer container ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4.
Sep 9 23:59:25.682855 containerd[1546]: time="2025-09-09T23:59:25.682820454Z" level=info msg="StartContainer for \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" returns successfully"
Sep 9 23:59:25.699106 systemd[1]: cri-containerd-ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4.scope: Deactivated successfully.
Sep 9 23:59:25.734373 containerd[1546]: time="2025-09-09T23:59:25.734313404Z" level=info msg="received exit event container_id:\"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" id:\"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" pid:3134 exited_at:{seconds:1757462365 nanos:733917843}"
Sep 9 23:59:25.734629 containerd[1546]: time="2025-09-09T23:59:25.734399924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" id:\"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" pid:3134 exited_at:{seconds:1757462365 nanos:733917843}"
Sep 9 23:59:26.505533 containerd[1546]: time="2025-09-09T23:59:26.504730386Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:59:26.513327 containerd[1546]: time="2025-09-09T23:59:26.513293831Z" level=info msg="Container d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:26.518240 containerd[1546]: time="2025-09-09T23:59:26.518209194Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\""
Sep 9 23:59:26.519081 containerd[1546]: time="2025-09-09T23:59:26.519045874Z" level=info msg="StartContainer for \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\""
Sep 9 23:59:26.520085 containerd[1546]: time="2025-09-09T23:59:26.520062315Z" level=info msg="connecting to shim d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446" address="unix:///run/containerd/s/573aacd305aa48bfc953f90a204efeb3a0f308756b47bb04f85a26521061249a" protocol=ttrpc version=3
Sep 9 23:59:26.544690 systemd[1]: Started cri-containerd-d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446.scope - libcontainer container d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446.
Sep 9 23:59:26.576753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4-rootfs.mount: Deactivated successfully.
Sep 9 23:59:26.577780 containerd[1546]: time="2025-09-09T23:59:26.577741306Z" level=info msg="StartContainer for \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" returns successfully"
Sep 9 23:59:26.590204 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:59:26.590437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:59:26.590621 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:59:26.592065 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:59:26.593480 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 23:59:26.594584 systemd[1]: cri-containerd-d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446.scope: Deactivated successfully.
Sep 9 23:59:26.595795 containerd[1546]: time="2025-09-09T23:59:26.595746835Z" level=info msg="received exit event container_id:\"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" id:\"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" pid:3177 exited_at:{seconds:1757462366 nanos:595132755}"
Sep 9 23:59:26.595939 containerd[1546]: time="2025-09-09T23:59:26.595911355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" id:\"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" pid:3177 exited_at:{seconds:1757462366 nanos:595132755}"
Sep 9 23:59:26.617873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446-rootfs.mount: Deactivated successfully.
Sep 9 23:59:26.619597 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:59:27.506099 containerd[1546]: time="2025-09-09T23:59:27.505989146Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:59:27.527386 containerd[1546]: time="2025-09-09T23:59:27.527273277Z" level=info msg="Container 57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:27.543349 containerd[1546]: time="2025-09-09T23:59:27.543287205Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\""
Sep 9 23:59:27.544020 containerd[1546]: time="2025-09-09T23:59:27.543940726Z" level=info msg="StartContainer for \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\""
Sep 9 23:59:27.545862 containerd[1546]: time="2025-09-09T23:59:27.545833847Z" level=info msg="connecting to shim 57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494" address="unix:///run/containerd/s/573aacd305aa48bfc953f90a204efeb3a0f308756b47bb04f85a26521061249a" protocol=ttrpc version=3
Sep 9 23:59:27.571688 systemd[1]: Started cri-containerd-57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494.scope - libcontainer container 57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494.
Sep 9 23:59:27.622153 containerd[1546]: time="2025-09-09T23:59:27.622110805Z" level=info msg="StartContainer for \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" returns successfully"
Sep 9 23:59:27.622435 systemd[1]: cri-containerd-57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494.scope: Deactivated successfully.
Sep 9 23:59:27.630372 containerd[1546]: time="2025-09-09T23:59:27.630335449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" id:\"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" pid:3225 exited_at:{seconds:1757462367 nanos:629920769}"
Sep 9 23:59:27.630533 containerd[1546]: time="2025-09-09T23:59:27.630476049Z" level=info msg="received exit event container_id:\"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" id:\"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" pid:3225 exited_at:{seconds:1757462367 nanos:629920769}"
Sep 9 23:59:27.653368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494-rootfs.mount: Deactivated successfully.
Sep 9 23:59:28.515170 containerd[1546]: time="2025-09-09T23:59:28.515042558Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:59:28.527531 containerd[1546]: time="2025-09-09T23:59:28.525636523Z" level=info msg="Container c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:28.536521 containerd[1546]: time="2025-09-09T23:59:28.536451648Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\""
Sep 9 23:59:28.541526 containerd[1546]: time="2025-09-09T23:59:28.541468250Z" level=info msg="StartContainer for \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\""
Sep 9 23:59:28.542328 containerd[1546]: time="2025-09-09T23:59:28.542296051Z" level=info msg="connecting to shim c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50" address="unix:///run/containerd/s/573aacd305aa48bfc953f90a204efeb3a0f308756b47bb04f85a26521061249a" protocol=ttrpc version=3
Sep 9 23:59:28.573677 systemd[1]: Started cri-containerd-c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50.scope - libcontainer container c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50.
Sep 9 23:59:28.595076 systemd[1]: cri-containerd-c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50.scope: Deactivated successfully.
Sep 9 23:59:28.595722 containerd[1546]: time="2025-09-09T23:59:28.595530876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" id:\"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" pid:3265 exited_at:{seconds:1757462368 nanos:595270435}"
Sep 9 23:59:28.599704 containerd[1546]: time="2025-09-09T23:59:28.599662318Z" level=info msg="received exit event container_id:\"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" id:\"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" pid:3265 exited_at:{seconds:1757462368 nanos:595270435}"
Sep 9 23:59:28.608267 containerd[1546]: time="2025-09-09T23:59:28.608223242Z" level=info msg="StartContainer for \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" returns successfully"
Sep 9 23:59:28.618356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50-rootfs.mount: Deactivated successfully.
Sep 9 23:59:29.520191 containerd[1546]: time="2025-09-09T23:59:29.520131736Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:59:29.530772 containerd[1546]: time="2025-09-09T23:59:29.530542261Z" level=info msg="Container e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:29.533376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470377779.mount: Deactivated successfully.
Sep 9 23:59:29.542918 containerd[1546]: time="2025-09-09T23:59:29.542865386Z" level=info msg="CreateContainer within sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\""
Sep 9 23:59:29.543656 containerd[1546]: time="2025-09-09T23:59:29.543628347Z" level=info msg="StartContainer for \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\""
Sep 9 23:59:29.544800 containerd[1546]: time="2025-09-09T23:59:29.544775067Z" level=info msg="connecting to shim e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9" address="unix:///run/containerd/s/573aacd305aa48bfc953f90a204efeb3a0f308756b47bb04f85a26521061249a" protocol=ttrpc version=3
Sep 9 23:59:29.572698 systemd[1]: Started cri-containerd-e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9.scope - libcontainer container e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9.
Sep 9 23:59:29.604539 containerd[1546]: time="2025-09-09T23:59:29.604455373Z" level=info msg="StartContainer for \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" returns successfully"
Sep 9 23:59:29.699620 containerd[1546]: time="2025-09-09T23:59:29.699575375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" id:\"16b97f9a14da2d4e691dc10235cf995943af202eab05d8895cddc09ceeb4accc\" pid:3336 exited_at:{seconds:1757462369 nanos:699266575}"
Sep 9 23:59:29.769294 kubelet[2648]: I0909 23:59:29.769264 2648 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 23:59:29.802941 systemd[1]: Created slice kubepods-burstable-podf01ea400_5400_4c74_89fc_0a66ab71a686.slice - libcontainer container kubepods-burstable-podf01ea400_5400_4c74_89fc_0a66ab71a686.slice.
Sep 9 23:59:29.810369 systemd[1]: Created slice kubepods-burstable-pode55f1494_326f_4f80_a938_8ef8baf4c50e.slice - libcontainer container kubepods-burstable-pode55f1494_326f_4f80_a938_8ef8baf4c50e.slice.
Sep 9 23:59:29.880225 kubelet[2648]: I0909 23:59:29.880180 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e55f1494-326f-4f80-a938-8ef8baf4c50e-config-volume\") pod \"coredns-668d6bf9bc-7vdf9\" (UID: \"e55f1494-326f-4f80-a938-8ef8baf4c50e\") " pod="kube-system/coredns-668d6bf9bc-7vdf9"
Sep 9 23:59:29.880225 kubelet[2648]: I0909 23:59:29.880228 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfbf\" (UniqueName: \"kubernetes.io/projected/e55f1494-326f-4f80-a938-8ef8baf4c50e-kube-api-access-mtfbf\") pod \"coredns-668d6bf9bc-7vdf9\" (UID: \"e55f1494-326f-4f80-a938-8ef8baf4c50e\") " pod="kube-system/coredns-668d6bf9bc-7vdf9"
Sep 9 23:59:29.880400 kubelet[2648]: I0909 23:59:29.880257 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f01ea400-5400-4c74-89fc-0a66ab71a686-config-volume\") pod \"coredns-668d6bf9bc-l9b82\" (UID: \"f01ea400-5400-4c74-89fc-0a66ab71a686\") " pod="kube-system/coredns-668d6bf9bc-l9b82"
Sep 9 23:59:29.880400 kubelet[2648]: I0909 23:59:29.880280 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m5zx\" (UniqueName: \"kubernetes.io/projected/f01ea400-5400-4c74-89fc-0a66ab71a686-kube-api-access-2m5zx\") pod \"coredns-668d6bf9bc-l9b82\" (UID: \"f01ea400-5400-4c74-89fc-0a66ab71a686\") " pod="kube-system/coredns-668d6bf9bc-l9b82"
Sep 9 23:59:30.108996 containerd[1546]: time="2025-09-09T23:59:30.108954473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l9b82,Uid:f01ea400-5400-4c74-89fc-0a66ab71a686,Namespace:kube-system,Attempt:0,}"
Sep 9 23:59:30.113693 containerd[1546]: time="2025-09-09T23:59:30.113631395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vdf9,Uid:e55f1494-326f-4f80-a938-8ef8baf4c50e,Namespace:kube-system,Attempt:0,}"
Sep 9 23:59:30.539953 kubelet[2648]: I0909 23:59:30.539808 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f6h4v" podStartSLOduration=7.490068724 podStartE2EDuration="20.539789052s" podCreationTimestamp="2025-09-09 23:59:10 +0000 UTC" firstStartedPulling="2025-09-09 23:59:12.490358285 +0000 UTC m=+9.169764687" lastFinishedPulling="2025-09-09 23:59:25.540078613 +0000 UTC m=+22.219485015" observedRunningTime="2025-09-09 23:59:30.539603492 +0000 UTC m=+27.219009894" watchObservedRunningTime="2025-09-09 23:59:30.539789052 +0000 UTC m=+27.219195454"
Sep 9 23:59:31.608543 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:58252.service - OpenSSH per-connection server daemon (10.0.0.1:58252).
Sep 9 23:59:31.646196 systemd-networkd[1424]: cilium_host: Link UP
Sep 9 23:59:31.646325 systemd-networkd[1424]: cilium_net: Link UP
Sep 9 23:59:31.646440 systemd-networkd[1424]: cilium_net: Gained carrier
Sep 9 23:59:31.646582 systemd-networkd[1424]: cilium_host: Gained carrier
Sep 9 23:59:31.690770 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 58252 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:59:31.692253 sshd-session[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:59:31.697449 systemd-logind[1520]: New session 8 of user core.
Sep 9 23:59:31.704722 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 23:59:31.730546 systemd-networkd[1424]: cilium_vxlan: Link UP
Sep 9 23:59:31.730552 systemd-networkd[1424]: cilium_vxlan: Gained carrier
Sep 9 23:59:31.794984 systemd-networkd[1424]: cilium_host: Gained IPv6LL
Sep 9 23:59:31.838863 sshd[3507]: Connection closed by 10.0.0.1 port 58252
Sep 9 23:59:31.839352 sshd-session[3439]: pam_unix(sshd:session): session closed for user core
Sep 9 23:59:31.842862 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:58252.service: Deactivated successfully.
Sep 9 23:59:31.845940 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 23:59:31.846633 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit.
Sep 9 23:59:31.847738 systemd-logind[1520]: Removed session 8.
Sep 9 23:59:31.983582 kernel: NET: Registered PF_ALG protocol family
Sep 9 23:59:32.506813 systemd-networkd[1424]: cilium_net: Gained IPv6LL
Sep 9 23:59:32.599365 systemd-networkd[1424]: lxc_health: Link UP
Sep 9 23:59:32.599656 systemd-networkd[1424]: lxc_health: Gained carrier
Sep 9 23:59:32.954667 systemd-networkd[1424]: cilium_vxlan: Gained IPv6LL
Sep 9 23:59:33.180594 kernel: eth0: renamed from tmpbf63b
Sep 9 23:59:33.184080 systemd-networkd[1424]: lxcdc5f4cf3cbbb: Link UP
Sep 9 23:59:33.185045 systemd-networkd[1424]: lxcf9015332b4e4: Link UP
Sep 9 23:59:33.190529 kernel: eth0: renamed from tmpd816e
Sep 9 23:59:33.192641 systemd-networkd[1424]: lxcf9015332b4e4: Gained carrier
Sep 9 23:59:33.195272 systemd-networkd[1424]: lxcdc5f4cf3cbbb: Gained carrier
Sep 9 23:59:33.850751 systemd-networkd[1424]: lxc_health: Gained IPv6LL
Sep 9 23:59:35.066811 systemd-networkd[1424]: lxcf9015332b4e4: Gained IPv6LL
Sep 9 23:59:35.067554 systemd-networkd[1424]: lxcdc5f4cf3cbbb: Gained IPv6LL
Sep 9 23:59:36.854648 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:58256.service - OpenSSH per-connection server daemon (10.0.0.1:58256).
Sep 9 23:59:36.928573 sshd[3844]: Accepted publickey for core from 10.0.0.1 port 58256 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:59:36.929945 sshd-session[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:59:36.936619 systemd-logind[1520]: New session 9 of user core.
Sep 9 23:59:36.941215 containerd[1546]: time="2025-09-09T23:59:36.941168941Z" level=info msg="connecting to shim bf63ba2f5ab86581ffeebe212ff14c7fc72e2898460de414a1291551d43cbf88" address="unix:///run/containerd/s/db40d9b498b99a826dbb1b05e10f3a4fc6995513ec4577dcc2c9386c7a2c6308" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:59:36.941905 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 23:59:36.942330 containerd[1546]: time="2025-09-09T23:59:36.942265102Z" level=info msg="connecting to shim d816e91c151095cfa634cd0ec30b0fecc0c4f1a53c708c96d5306febc786faae" address="unix:///run/containerd/s/86584b14677f6f185576d0692267967e9ccfb9823b97850769c4316b14059174" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:59:36.970761 systemd[1]: Started cri-containerd-bf63ba2f5ab86581ffeebe212ff14c7fc72e2898460de414a1291551d43cbf88.scope - libcontainer container bf63ba2f5ab86581ffeebe212ff14c7fc72e2898460de414a1291551d43cbf88.
Sep 9 23:59:36.974134 systemd[1]: Started cri-containerd-d816e91c151095cfa634cd0ec30b0fecc0c4f1a53c708c96d5306febc786faae.scope - libcontainer container d816e91c151095cfa634cd0ec30b0fecc0c4f1a53c708c96d5306febc786faae.
Sep 9 23:59:36.986034 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 23:59:36.988672 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 23:59:37.025074 containerd[1546]: time="2025-09-09T23:59:37.025034524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vdf9,Uid:e55f1494-326f-4f80-a938-8ef8baf4c50e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf63ba2f5ab86581ffeebe212ff14c7fc72e2898460de414a1291551d43cbf88\""
Sep 9 23:59:37.028351 containerd[1546]: time="2025-09-09T23:59:37.028306565Z" level=info msg="CreateContainer within sandbox \"bf63ba2f5ab86581ffeebe212ff14c7fc72e2898460de414a1291551d43cbf88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 23:59:37.030172 containerd[1546]: time="2025-09-09T23:59:37.030133526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l9b82,Uid:f01ea400-5400-4c74-89fc-0a66ab71a686,Namespace:kube-system,Attempt:0,} returns sandbox id \"d816e91c151095cfa634cd0ec30b0fecc0c4f1a53c708c96d5306febc786faae\""
Sep 9 23:59:37.034771 containerd[1546]: time="2025-09-09T23:59:37.034728087Z" level=info msg="CreateContainer within sandbox \"d816e91c151095cfa634cd0ec30b0fecc0c4f1a53c708c96d5306febc786faae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 23:59:37.039537 containerd[1546]: time="2025-09-09T23:59:37.038548928Z" level=info msg="Container 87a05fcec936214af8c7bf5f9560ef91d4bb1c7e8ffa599460e545f68e31db7c: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:37.048273 containerd[1546]: time="2025-09-09T23:59:37.048220411Z" level=info msg="Container 271c74ce1e754af7bc8b9e7614bde99bd7586fa208239a1496074b8f80c86798: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:59:37.052309 containerd[1546]: time="2025-09-09T23:59:37.052248612Z" level=info msg="CreateContainer within sandbox \"bf63ba2f5ab86581ffeebe212ff14c7fc72e2898460de414a1291551d43cbf88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87a05fcec936214af8c7bf5f9560ef91d4bb1c7e8ffa599460e545f68e31db7c\""
Sep 9 23:59:37.053074 containerd[1546]: time="2025-09-09T23:59:37.053041332Z" level=info msg="StartContainer for \"87a05fcec936214af8c7bf5f9560ef91d4bb1c7e8ffa599460e545f68e31db7c\""
Sep 9 23:59:37.054456 containerd[1546]: time="2025-09-09T23:59:37.054341412Z" level=info msg="connecting to shim 87a05fcec936214af8c7bf5f9560ef91d4bb1c7e8ffa599460e545f68e31db7c" address="unix:///run/containerd/s/db40d9b498b99a826dbb1b05e10f3a4fc6995513ec4577dcc2c9386c7a2c6308" protocol=ttrpc version=3
Sep 9 23:59:37.061125 containerd[1546]: time="2025-09-09T23:59:37.061079334Z" level=info msg="CreateContainer within sandbox \"d816e91c151095cfa634cd0ec30b0fecc0c4f1a53c708c96d5306febc786faae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"271c74ce1e754af7bc8b9e7614bde99bd7586fa208239a1496074b8f80c86798\""
Sep 9 23:59:37.062987 containerd[1546]: time="2025-09-09T23:59:37.062946814Z" level=info msg="StartContainer for \"271c74ce1e754af7bc8b9e7614bde99bd7586fa208239a1496074b8f80c86798\""
Sep 9 23:59:37.063932 containerd[1546]: time="2025-09-09T23:59:37.063899455Z" level=info msg="connecting to shim 271c74ce1e754af7bc8b9e7614bde99bd7586fa208239a1496074b8f80c86798" address="unix:///run/containerd/s/86584b14677f6f185576d0692267967e9ccfb9823b97850769c4316b14059174" protocol=ttrpc version=3
Sep 9 23:59:37.078758 systemd[1]: Started cri-containerd-87a05fcec936214af8c7bf5f9560ef91d4bb1c7e8ffa599460e545f68e31db7c.scope - libcontainer container 87a05fcec936214af8c7bf5f9560ef91d4bb1c7e8ffa599460e545f68e31db7c.
Sep 9 23:59:37.098711 systemd[1]: Started cri-containerd-271c74ce1e754af7bc8b9e7614bde99bd7586fa208239a1496074b8f80c86798.scope - libcontainer container 271c74ce1e754af7bc8b9e7614bde99bd7586fa208239a1496074b8f80c86798.
Sep 9 23:59:37.111315 sshd[3875]: Connection closed by 10.0.0.1 port 58256
Sep 9 23:59:37.111663 sshd-session[3844]: pam_unix(sshd:session): session closed for user core
Sep 9 23:59:37.116502 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:58256.service: Deactivated successfully.
Sep 9 23:59:37.118860 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 23:59:37.119906 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit.
Sep 9 23:59:37.124253 systemd-logind[1520]: Removed session 9.
Sep 9 23:59:37.124967 containerd[1546]: time="2025-09-09T23:59:37.124568271Z" level=info msg="StartContainer for \"87a05fcec936214af8c7bf5f9560ef91d4bb1c7e8ffa599460e545f68e31db7c\" returns successfully"
Sep 9 23:59:37.143947 containerd[1546]: time="2025-09-09T23:59:37.143910316Z" level=info msg="StartContainer for \"271c74ce1e754af7bc8b9e7614bde99bd7586fa208239a1496074b8f80c86798\" returns successfully"
Sep 9 23:59:37.569923 kubelet[2648]: I0909 23:59:37.569858 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l9b82" podStartSLOduration=27.569837788 podStartE2EDuration="27.569837788s" podCreationTimestamp="2025-09-09 23:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:59:37.555827304 +0000 UTC m=+34.235233706" watchObservedRunningTime="2025-09-09 23:59:37.569837788 +0000 UTC m=+34.249244190"
Sep 9 23:59:42.130925 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:40906.service - OpenSSH per-connection server daemon (10.0.0.1:40906).
Sep 9 23:59:42.193650 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 40906 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:59:42.195283 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:59:42.199703 systemd-logind[1520]: New session 10 of user core.
Sep 9 23:59:42.210717 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 23:59:42.329071 sshd[4035]: Connection closed by 10.0.0.1 port 40906
Sep 9 23:59:42.329439 sshd-session[4030]: pam_unix(sshd:session): session closed for user core
Sep 9 23:59:42.332818 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:40906.service: Deactivated successfully.
Sep 9 23:59:42.334454 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 23:59:42.335194 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit.
Sep 9 23:59:42.336373 systemd-logind[1520]: Removed session 10.
Sep 9 23:59:47.350743 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:40912.service - OpenSSH per-connection server daemon (10.0.0.1:40912).
Sep 9 23:59:47.417521 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 40912 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:59:47.419091 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:59:47.425359 systemd-logind[1520]: New session 11 of user core.
Sep 9 23:59:47.440734 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 23:59:47.562644 kubelet[2648]: I0909 23:59:47.562582 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7vdf9" podStartSLOduration=37.562564362 podStartE2EDuration="37.562564362s" podCreationTimestamp="2025-09-09 23:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:59:37.570153148 +0000 UTC m=+34.249559550" watchObservedRunningTime="2025-09-09 23:59:47.562564362 +0000 UTC m=+44.241970764" Sep 9 23:59:47.608408 sshd[4054]: Connection closed by 10.0.0.1 port 40912 Sep 9 23:59:47.608640 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:47.619139 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:40912.service: Deactivated successfully. Sep 9 23:59:47.621998 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:59:47.622762 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:59:47.625131 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:40920.service - OpenSSH per-connection server daemon (10.0.0.1:40920). Sep 9 23:59:47.626073 systemd-logind[1520]: Removed session 11. Sep 9 23:59:47.696298 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 40920 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:59:47.697664 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:47.701382 systemd-logind[1520]: New session 12 of user core. Sep 9 23:59:47.716678 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 23:59:47.862003 sshd[4078]: Connection closed by 10.0.0.1 port 40920 Sep 9 23:59:47.861265 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:47.872684 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:40920.service: Deactivated successfully. 
Sep 9 23:59:47.876620 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 23:59:47.878026 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. Sep 9 23:59:47.882200 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:40936.service - OpenSSH per-connection server daemon (10.0.0.1:40936). Sep 9 23:59:47.882724 systemd-logind[1520]: Removed session 12. Sep 9 23:59:47.945410 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 40936 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:59:47.946735 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:47.950739 systemd-logind[1520]: New session 13 of user core. Sep 9 23:59:47.960709 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 23:59:48.071159 sshd[4093]: Connection closed by 10.0.0.1 port 40936 Sep 9 23:59:48.071677 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:48.074964 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:40936.service: Deactivated successfully. Sep 9 23:59:48.077889 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 23:59:48.078562 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. Sep 9 23:59:48.079533 systemd-logind[1520]: Removed session 13. Sep 9 23:59:53.094460 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:46982.service - OpenSSH per-connection server daemon (10.0.0.1:46982). Sep 9 23:59:53.168766 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 46982 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:59:53.170185 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:53.176293 systemd-logind[1520]: New session 14 of user core. Sep 9 23:59:53.184822 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 9 23:59:53.328808 sshd[4111]: Connection closed by 10.0.0.1 port 46982 Sep 9 23:59:53.329255 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:53.332372 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:46982.service: Deactivated successfully. Sep 9 23:59:53.333963 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 23:59:53.335690 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. Sep 9 23:59:53.336571 systemd-logind[1520]: Removed session 14. Sep 9 23:59:58.347975 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:46998.service - OpenSSH per-connection server daemon (10.0.0.1:46998). Sep 9 23:59:58.413021 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 46998 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:59:58.414489 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.421631 systemd-logind[1520]: New session 15 of user core. Sep 9 23:59:58.434865 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 23:59:58.555717 sshd[4128]: Connection closed by 10.0.0.1 port 46998 Sep 9 23:59:58.556292 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:58.569598 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:46998.service: Deactivated successfully. Sep 9 23:59:58.571691 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 23:59:58.572727 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. Sep 9 23:59:58.575466 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:47012.service - OpenSSH per-connection server daemon (10.0.0.1:47012). Sep 9 23:59:58.576988 systemd-logind[1520]: Removed session 15. 
Sep 9 23:59:58.635923 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 47012 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:59:58.637814 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.643841 systemd-logind[1520]: New session 16 of user core. Sep 9 23:59:58.654739 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 23:59:58.901292 sshd[4144]: Connection closed by 10.0.0.1 port 47012 Sep 9 23:59:58.901917 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:58.911914 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:47012.service: Deactivated successfully. Sep 9 23:59:58.913933 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 23:59:58.914893 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. Sep 9 23:59:58.917534 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:47028.service - OpenSSH per-connection server daemon (10.0.0.1:47028). Sep 9 23:59:58.918143 systemd-logind[1520]: Removed session 16. Sep 9 23:59:58.984359 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 47028 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:59:58.987896 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.992677 systemd-logind[1520]: New session 17 of user core. Sep 9 23:59:58.998765 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 23:59:59.625766 sshd[4160]: Connection closed by 10.0.0.1 port 47028 Sep 9 23:59:59.626654 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:59.636869 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:47028.service: Deactivated successfully. Sep 9 23:59:59.639051 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 23:59:59.641708 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. 
Sep 9 23:59:59.645211 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:47036.service - OpenSSH per-connection server daemon (10.0.0.1:47036). Sep 9 23:59:59.647693 systemd-logind[1520]: Removed session 17. Sep 9 23:59:59.719110 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 47036 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:59:59.720535 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:59.725427 systemd-logind[1520]: New session 18 of user core. Sep 9 23:59:59.738758 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 23:59:59.992700 sshd[4185]: Connection closed by 10.0.0.1 port 47036 Sep 9 23:59:59.993612 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:00.008771 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:47036.service: Deactivated successfully. Sep 10 00:00:00.011867 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:00:00.014291 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:00:00.016879 systemd[1]: Started sshd@18-10.0.0.125:22-10.0.0.1:38570.service - OpenSSH per-connection server daemon (10.0.0.1:38570). Sep 10 00:00:00.018575 systemd-logind[1520]: Removed session 18. Sep 10 00:00:00.089000 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 38570 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:00.090874 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:00.096711 systemd-logind[1520]: New session 19 of user core. Sep 10 00:00:00.112408 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 10 00:00:00.234154 sshd[4199]: Connection closed by 10.0.0.1 port 38570 Sep 10 00:00:00.234692 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:00.238624 systemd[1]: sshd@18-10.0.0.125:22-10.0.0.1:38570.service: Deactivated successfully. Sep 10 00:00:00.240271 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:00:00.241316 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:00:00.242670 systemd-logind[1520]: Removed session 19. Sep 10 00:00:05.259046 systemd[1]: Started sshd@19-10.0.0.125:22-10.0.0.1:38612.service - OpenSSH per-connection server daemon (10.0.0.1:38612). Sep 10 00:00:05.322881 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 38612 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:05.324204 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:05.329614 systemd-logind[1520]: New session 20 of user core. Sep 10 00:00:05.338696 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 00:00:05.453534 sshd[4223]: Connection closed by 10.0.0.1 port 38612 Sep 10 00:00:05.453804 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:05.457408 systemd[1]: sshd@19-10.0.0.125:22-10.0.0.1:38612.service: Deactivated successfully. Sep 10 00:00:05.459258 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:00:05.460164 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:00:05.461246 systemd-logind[1520]: Removed session 20. Sep 10 00:00:10.468840 systemd[1]: Started sshd@20-10.0.0.125:22-10.0.0.1:58748.service - OpenSSH per-connection server daemon (10.0.0.1:58748). 
Sep 10 00:00:10.534063 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 58748 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:10.535428 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:10.544820 systemd-logind[1520]: New session 21 of user core. Sep 10 00:00:10.563355 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 10 00:00:10.693192 sshd[4241]: Connection closed by 10.0.0.1 port 58748 Sep 10 00:00:10.693558 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:10.697197 systemd[1]: sshd@20-10.0.0.125:22-10.0.0.1:58748.service: Deactivated successfully. Sep 10 00:00:10.699354 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 00:00:10.700308 systemd-logind[1520]: Session 21 logged out. Waiting for processes to exit. Sep 10 00:00:10.702020 systemd-logind[1520]: Removed session 21. Sep 10 00:00:15.710392 systemd[1]: Started sshd@21-10.0.0.125:22-10.0.0.1:58774.service - OpenSSH per-connection server daemon (10.0.0.1:58774). Sep 10 00:00:15.773263 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 58774 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:15.774665 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:15.779397 systemd-logind[1520]: New session 22 of user core. Sep 10 00:00:15.790747 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 00:00:15.900450 sshd[4261]: Connection closed by 10.0.0.1 port 58774 Sep 10 00:00:15.901218 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:15.904006 systemd[1]: sshd@21-10.0.0.125:22-10.0.0.1:58774.service: Deactivated successfully. Sep 10 00:00:15.905716 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 00:00:15.907051 systemd-logind[1520]: Session 22 logged out. Waiting for processes to exit. 
Sep 10 00:00:15.908771 systemd-logind[1520]: Removed session 22. Sep 10 00:00:18.422901 kubelet[2648]: E0910 00:00:18.422820 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:00:20.911730 systemd[1]: Started sshd@22-10.0.0.125:22-10.0.0.1:47106.service - OpenSSH per-connection server daemon (10.0.0.1:47106). Sep 10 00:00:20.987854 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 47106 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:20.989769 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:20.995714 systemd-logind[1520]: New session 23 of user core. Sep 10 00:00:21.005697 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 00:00:21.148654 sshd[4277]: Connection closed by 10.0.0.1 port 47106 Sep 10 00:00:21.148967 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:21.157723 systemd[1]: sshd@22-10.0.0.125:22-10.0.0.1:47106.service: Deactivated successfully. Sep 10 00:00:21.159564 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:00:21.160234 systemd-logind[1520]: Session 23 logged out. Waiting for processes to exit. Sep 10 00:00:21.162270 systemd[1]: Started sshd@23-10.0.0.125:22-10.0.0.1:47118.service - OpenSSH per-connection server daemon (10.0.0.1:47118). Sep 10 00:00:21.164070 systemd-logind[1520]: Removed session 23. Sep 10 00:00:21.229918 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 47118 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:21.231427 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:21.235578 systemd-logind[1520]: New session 24 of user core. Sep 10 00:00:21.250766 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 10 00:00:23.489553 containerd[1546]: time="2025-09-10T00:00:23.488744079Z" level=info msg="StopContainer for \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" with timeout 30 (s)" Sep 10 00:00:23.492074 containerd[1546]: time="2025-09-10T00:00:23.489772293Z" level=info msg="Stop container \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" with signal terminated" Sep 10 00:00:23.511209 systemd[1]: cri-containerd-b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94.scope: Deactivated successfully. Sep 10 00:00:23.513180 containerd[1546]: time="2025-09-10T00:00:23.513126576Z" level=info msg="received exit event container_id:\"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" id:\"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" pid:3052 exited_at:{seconds:1757462423 nanos:512379355}" Sep 10 00:00:23.513437 containerd[1546]: time="2025-09-10T00:00:23.513414729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" id:\"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" pid:3052 exited_at:{seconds:1757462423 nanos:512379355}" Sep 10 00:00:23.518176 containerd[1546]: time="2025-09-10T00:00:23.518134208Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:00:23.522665 containerd[1546]: time="2025-09-10T00:00:23.522627173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" id:\"f4b11d263182c232fc999ad5f907a0ac79ae0a61e85ae8dfa805805b40db0b25\" pid:4316 exited_at:{seconds:1757462423 nanos:522141026}" Sep 10 00:00:23.524357 containerd[1546]: time="2025-09-10T00:00:23.524324650Z" level=info 
msg="StopContainer for \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" with timeout 2 (s)" Sep 10 00:00:23.525786 containerd[1546]: time="2025-09-10T00:00:23.525753053Z" level=info msg="Stop container \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" with signal terminated" Sep 10 00:00:23.534478 systemd-networkd[1424]: lxc_health: Link DOWN Sep 10 00:00:23.534542 systemd-networkd[1424]: lxc_health: Lost carrier Sep 10 00:00:23.541634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94-rootfs.mount: Deactivated successfully. Sep 10 00:00:23.558254 containerd[1546]: time="2025-09-10T00:00:23.558199264Z" level=info msg="StopContainer for \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" returns successfully" Sep 10 00:00:23.559031 containerd[1546]: time="2025-09-10T00:00:23.558997484Z" level=info msg="StopPodSandbox for \"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\"" Sep 10 00:00:23.559114 systemd[1]: cri-containerd-e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9.scope: Deactivated successfully. Sep 10 00:00:23.559407 systemd[1]: cri-containerd-e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9.scope: Consumed 6.375s CPU time, 121.6M memory peak, 303K read from disk, 12.9M written to disk. 
Sep 10 00:00:23.559724 containerd[1546]: time="2025-09-10T00:00:23.559433433Z" level=info msg="Container to stop \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:00:23.560618 containerd[1546]: time="2025-09-10T00:00:23.560592963Z" level=info msg="received exit event container_id:\"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" id:\"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" pid:3303 exited_at:{seconds:1757462423 nanos:560236892}" Sep 10 00:00:23.560760 containerd[1546]: time="2025-09-10T00:00:23.560727440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" id:\"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" pid:3303 exited_at:{seconds:1757462423 nanos:560236892}" Sep 10 00:00:23.572498 systemd[1]: cri-containerd-97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d.scope: Deactivated successfully. Sep 10 00:00:23.573998 containerd[1546]: time="2025-09-10T00:00:23.573940182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\" id:\"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\" pid:2760 exit_status:137 exited_at:{seconds:1757462423 nanos:573487794}" Sep 10 00:00:23.586035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9-rootfs.mount: Deactivated successfully. 
Sep 10 00:00:23.598526 containerd[1546]: time="2025-09-10T00:00:23.598482915Z" level=info msg="StopContainer for \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" returns successfully" Sep 10 00:00:23.599102 containerd[1546]: time="2025-09-10T00:00:23.599005142Z" level=info msg="StopPodSandbox for \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\"" Sep 10 00:00:23.599234 containerd[1546]: time="2025-09-10T00:00:23.599207497Z" level=info msg="Container to stop \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:00:23.599311 containerd[1546]: time="2025-09-10T00:00:23.599297174Z" level=info msg="Container to stop \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:00:23.599365 containerd[1546]: time="2025-09-10T00:00:23.599352613Z" level=info msg="Container to stop \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:00:23.599423 containerd[1546]: time="2025-09-10T00:00:23.599411212Z" level=info msg="Container to stop \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:00:23.599478 containerd[1546]: time="2025-09-10T00:00:23.599464250Z" level=info msg="Container to stop \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:00:23.602649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d-rootfs.mount: Deactivated successfully. 
Sep 10 00:00:23.605455 systemd[1]: cri-containerd-10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539.scope: Deactivated successfully. Sep 10 00:00:23.608799 containerd[1546]: time="2025-09-10T00:00:23.608760933Z" level=info msg="TearDown network for sandbox \"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\" successfully" Sep 10 00:00:23.608799 containerd[1546]: time="2025-09-10T00:00:23.608790572Z" level=info msg="StopPodSandbox for \"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\" returns successfully" Sep 10 00:00:23.610069 containerd[1546]: time="2025-09-10T00:00:23.608504619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" id:\"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" pid:3024 exit_status:137 exited_at:{seconds:1757462423 nanos:607121855}" Sep 10 00:00:23.610231 containerd[1546]: time="2025-09-10T00:00:23.610179576Z" level=info msg="received exit event sandbox_id:\"97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d\" exit_status:137 exited_at:{seconds:1757462423 nanos:573487794}" Sep 10 00:00:23.610979 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d-shm.mount: Deactivated successfully. 
Sep 10 00:00:23.611703 containerd[1546]: time="2025-09-10T00:00:23.611676018Z" level=info msg="shim disconnected" id=97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d namespace=k8s.io Sep 10 00:00:23.611795 containerd[1546]: time="2025-09-10T00:00:23.611705377Z" level=warning msg="cleaning up after shim disconnected" id=97518fca59beb4a2d43ad7eba0adc5f52c7e2b315d5214da31f99d3dcfa7730d namespace=k8s.io Sep 10 00:00:23.611795 containerd[1546]: time="2025-09-10T00:00:23.611733817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:00:23.632947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539-rootfs.mount: Deactivated successfully. Sep 10 00:00:23.638452 containerd[1546]: time="2025-09-10T00:00:23.638414375Z" level=info msg="shim disconnected" id=10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539 namespace=k8s.io Sep 10 00:00:23.638703 containerd[1546]: time="2025-09-10T00:00:23.638448894Z" level=warning msg="cleaning up after shim disconnected" id=10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539 namespace=k8s.io Sep 10 00:00:23.638703 containerd[1546]: time="2025-09-10T00:00:23.638478413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:00:23.638703 containerd[1546]: time="2025-09-10T00:00:23.638417375Z" level=info msg="received exit event sandbox_id:\"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" exit_status:137 exited_at:{seconds:1757462423 nanos:607121855}" Sep 10 00:00:23.639545 containerd[1546]: time="2025-09-10T00:00:23.639491188Z" level=info msg="TearDown network for sandbox \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" successfully" Sep 10 00:00:23.639584 containerd[1546]: time="2025-09-10T00:00:23.639548586Z" level=info msg="StopPodSandbox for \"10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539\" returns successfully" Sep 10 00:00:23.648859 
kubelet[2648]: I0910 00:00:23.648811 2648 scope.go:117] "RemoveContainer" containerID="b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94" Sep 10 00:00:23.655590 containerd[1546]: time="2025-09-10T00:00:23.653899339Z" level=info msg="RemoveContainer for \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\"" Sep 10 00:00:23.662307 containerd[1546]: time="2025-09-10T00:00:23.662267966Z" level=info msg="RemoveContainer for \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" returns successfully" Sep 10 00:00:23.662664 kubelet[2648]: I0910 00:00:23.662623 2648 scope.go:117] "RemoveContainer" containerID="b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94" Sep 10 00:00:23.662985 containerd[1546]: time="2025-09-10T00:00:23.662933069Z" level=error msg="ContainerStatus for \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\": not found" Sep 10 00:00:23.663122 kubelet[2648]: E0910 00:00:23.663093 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\": not found" containerID="b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94" Sep 10 00:00:23.668426 kubelet[2648]: I0910 00:00:23.668296 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94"} err="failed to get container status \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5d8413bbbe311867604b4b12de7398e325f5f420810f7cf0d5cb862216f4c94\": not found" Sep 10 00:00:23.668426 kubelet[2648]: I0910 
00:00:23.668424 2648 scope.go:117] "RemoveContainer" containerID="e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9" Sep 10 00:00:23.672021 containerd[1546]: time="2025-09-10T00:00:23.671986957Z" level=info msg="RemoveContainer for \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\"" Sep 10 00:00:23.678168 containerd[1546]: time="2025-09-10T00:00:23.678121841Z" level=info msg="RemoveContainer for \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" returns successfully" Sep 10 00:00:23.678458 kubelet[2648]: I0910 00:00:23.678429 2648 scope.go:117] "RemoveContainer" containerID="c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50" Sep 10 00:00:23.680196 containerd[1546]: time="2025-09-10T00:00:23.680165388Z" level=info msg="RemoveContainer for \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\"" Sep 10 00:00:23.685239 containerd[1546]: time="2025-09-10T00:00:23.685195220Z" level=info msg="RemoveContainer for \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" returns successfully" Sep 10 00:00:23.685449 kubelet[2648]: I0910 00:00:23.685393 2648 scope.go:117] "RemoveContainer" containerID="57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494" Sep 10 00:00:23.687992 containerd[1546]: time="2025-09-10T00:00:23.687848072Z" level=info msg="RemoveContainer for \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\"" Sep 10 00:00:23.692523 containerd[1546]: time="2025-09-10T00:00:23.692418115Z" level=info msg="RemoveContainer for \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" returns successfully" Sep 10 00:00:23.692666 kubelet[2648]: I0910 00:00:23.692636 2648 scope.go:117] "RemoveContainer" containerID="d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446" Sep 10 00:00:23.696337 containerd[1546]: time="2025-09-10T00:00:23.696306736Z" level=info msg="RemoveContainer for 
\"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\"" Sep 10 00:00:23.699616 containerd[1546]: time="2025-09-10T00:00:23.699578372Z" level=info msg="RemoveContainer for \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" returns successfully" Sep 10 00:00:23.699908 kubelet[2648]: I0910 00:00:23.699801 2648 scope.go:117] "RemoveContainer" containerID="ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4" Sep 10 00:00:23.701320 containerd[1546]: time="2025-09-10T00:00:23.701291769Z" level=info msg="RemoveContainer for \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\"" Sep 10 00:00:23.707707 containerd[1546]: time="2025-09-10T00:00:23.707606727Z" level=info msg="RemoveContainer for \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" returns successfully" Sep 10 00:00:23.707975 kubelet[2648]: I0910 00:00:23.707944 2648 scope.go:117] "RemoveContainer" containerID="e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9" Sep 10 00:00:23.708281 containerd[1546]: time="2025-09-10T00:00:23.708191672Z" level=error msg="ContainerStatus for \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\": not found" Sep 10 00:00:23.708421 kubelet[2648]: E0910 00:00:23.708397 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\": not found" containerID="e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9" Sep 10 00:00:23.708458 kubelet[2648]: I0910 00:00:23.708431 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9"} err="failed to get 
container status \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4658e9c48027ddf0b9ec472c54bcbe425cba64a1f18b1b432b17ecd8fd4cbb9\": not found" Sep 10 00:00:23.708458 kubelet[2648]: I0910 00:00:23.708455 2648 scope.go:117] "RemoveContainer" containerID="c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50" Sep 10 00:00:23.708676 containerd[1546]: time="2025-09-10T00:00:23.708637701Z" level=error msg="ContainerStatus for \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\": not found" Sep 10 00:00:23.708909 kubelet[2648]: E0910 00:00:23.708770 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\": not found" containerID="c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50" Sep 10 00:00:23.708909 kubelet[2648]: I0910 00:00:23.708796 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50"} err="failed to get container status \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6a1f60051ffa9a5e13907eca51415a47caaa354a9fb202b5064a561e8fc2b50\": not found" Sep 10 00:00:23.708909 kubelet[2648]: I0910 00:00:23.708816 2648 scope.go:117] "RemoveContainer" containerID="57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494" Sep 10 00:00:23.709139 containerd[1546]: time="2025-09-10T00:00:23.709109689Z" level=error msg="ContainerStatus for 
\"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\": not found" Sep 10 00:00:23.709465 kubelet[2648]: E0910 00:00:23.709417 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\": not found" containerID="57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494" Sep 10 00:00:23.709639 kubelet[2648]: I0910 00:00:23.709575 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494"} err="failed to get container status \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\": rpc error: code = NotFound desc = an error occurred when try to find container \"57d93693681c2cb9c69f87e5bdd564d13a0997f233b6891fea322a6f448b7494\": not found" Sep 10 00:00:23.709639 kubelet[2648]: I0910 00:00:23.709599 2648 scope.go:117] "RemoveContainer" containerID="d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446" Sep 10 00:00:23.709904 containerd[1546]: time="2025-09-10T00:00:23.709866710Z" level=error msg="ContainerStatus for \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\": not found" Sep 10 00:00:23.710107 kubelet[2648]: E0910 00:00:23.710086 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\": not found" 
containerID="d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446" Sep 10 00:00:23.710152 kubelet[2648]: I0910 00:00:23.710122 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446"} err="failed to get container status \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4caf2e1705751bfebb0a4fd092ed9d39f692c1d3b5acc67cb1d38c68d9d1446\": not found" Sep 10 00:00:23.710152 kubelet[2648]: I0910 00:00:23.710138 2648 scope.go:117] "RemoveContainer" containerID="ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4" Sep 10 00:00:23.710422 containerd[1546]: time="2025-09-10T00:00:23.710389576Z" level=error msg="ContainerStatus for \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\": not found" Sep 10 00:00:23.710599 kubelet[2648]: E0910 00:00:23.710574 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\": not found" containerID="ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4" Sep 10 00:00:23.710642 kubelet[2648]: I0910 00:00:23.710604 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4"} err="failed to get container status \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee41102290b193009f7819e93afb5bff5767f7b309d4fad1253f5dd46dd588a4\": not found" Sep 10 
00:00:23.726859 kubelet[2648]: I0910 00:00:23.726813 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cni-path\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.727014 kubelet[2648]: I0910 00:00:23.726995 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cni-path" (OuterVolumeSpecName: "cni-path") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.727942 kubelet[2648]: I0910 00:00:23.727080 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f242ad-af2c-4d0c-ac05-4bd5506759b0-cilium-config-path\") pod \"31f242ad-af2c-4d0c-ac05-4bd5506759b0\" (UID: \"31f242ad-af2c-4d0c-ac05-4bd5506759b0\") " Sep 10 00:00:23.727942 kubelet[2648]: I0910 00:00:23.727602 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-cgroup\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.727942 kubelet[2648]: I0910 00:00:23.727627 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/974989e6-23e2-445e-b544-682979f8bef6-clustermesh-secrets\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.727942 kubelet[2648]: I0910 00:00:23.727645 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-bpf-maps\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.727942 kubelet[2648]: I0910 00:00:23.727661 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-net\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.727942 kubelet[2648]: I0910 00:00:23.727680 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-hubble-tls\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728126 kubelet[2648]: I0910 00:00:23.727688 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.728126 kubelet[2648]: I0910 00:00:23.727698 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n2h2\" (UniqueName: \"kubernetes.io/projected/31f242ad-af2c-4d0c-ac05-4bd5506759b0-kube-api-access-9n2h2\") pod \"31f242ad-af2c-4d0c-ac05-4bd5506759b0\" (UID: \"31f242ad-af2c-4d0c-ac05-4bd5506759b0\") " Sep 10 00:00:23.728126 kubelet[2648]: I0910 00:00:23.727761 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.728126 kubelet[2648]: I0910 00:00:23.727834 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-lib-modules\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728126 kubelet[2648]: I0910 00:00:23.727854 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-run\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728267 kubelet[2648]: I0910 00:00:23.727873 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/974989e6-23e2-445e-b544-682979f8bef6-cilium-config-path\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728267 kubelet[2648]: I0910 00:00:23.727991 2648 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-xtables-lock\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728267 kubelet[2648]: I0910 00:00:23.728012 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-etc-cni-netd\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728267 kubelet[2648]: I0910 00:00:23.728030 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-kernel\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728267 kubelet[2648]: I0910 00:00:23.728047 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-hostproc\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728267 kubelet[2648]: I0910 00:00:23.728065 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jv5d\" (UniqueName: \"kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-kube-api-access-8jv5d\") pod \"974989e6-23e2-445e-b544-682979f8bef6\" (UID: \"974989e6-23e2-445e-b544-682979f8bef6\") " Sep 10 00:00:23.728389 kubelet[2648]: I0910 00:00:23.728104 2648 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.728389 kubelet[2648]: I0910 00:00:23.728114 2648 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.728389 kubelet[2648]: I0910 00:00:23.728123 2648 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.728805 kubelet[2648]: I0910 00:00:23.728777 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.728805 kubelet[2648]: I0910 00:00:23.728795 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f242ad-af2c-4d0c-ac05-4bd5506759b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31f242ad-af2c-4d0c-ac05-4bd5506759b0" (UID: "31f242ad-af2c-4d0c-ac05-4bd5506759b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:00:23.728885 kubelet[2648]: I0910 00:00:23.728840 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.728885 kubelet[2648]: I0910 00:00:23.728855 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.728885 kubelet[2648]: I0910 00:00:23.728870 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.729083 kubelet[2648]: I0910 00:00:23.728991 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.729083 kubelet[2648]: I0910 00:00:23.729028 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.729083 kubelet[2648]: I0910 00:00:23.729050 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-hostproc" (OuterVolumeSpecName: "hostproc") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:00:23.730614 kubelet[2648]: I0910 00:00:23.730585 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974989e6-23e2-445e-b544-682979f8bef6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:00:23.730806 kubelet[2648]: I0910 00:00:23.730779 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-kube-api-access-8jv5d" (OuterVolumeSpecName: "kube-api-access-8jv5d") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "kube-api-access-8jv5d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:00:23.731601 kubelet[2648]: I0910 00:00:23.731574 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/974989e6-23e2-445e-b544-682979f8bef6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 00:00:23.731806 kubelet[2648]: I0910 00:00:23.731780 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "974989e6-23e2-445e-b544-682979f8bef6" (UID: "974989e6-23e2-445e-b544-682979f8bef6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:00:23.731944 kubelet[2648]: I0910 00:00:23.731912 2648 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f242ad-af2c-4d0c-ac05-4bd5506759b0-kube-api-access-9n2h2" (OuterVolumeSpecName: "kube-api-access-9n2h2") pod "31f242ad-af2c-4d0c-ac05-4bd5506759b0" (UID: "31f242ad-af2c-4d0c-ac05-4bd5506759b0"). InnerVolumeSpecName "kube-api-access-9n2h2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:00:23.829352 kubelet[2648]: I0910 00:00:23.829312 2648 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829352 kubelet[2648]: I0910 00:00:23.829346 2648 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/974989e6-23e2-445e-b544-682979f8bef6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829352 kubelet[2648]: I0910 00:00:23.829356 2648 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829364 2648 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-lib-modules\") on node 
\"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829376 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829384 2648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9n2h2\" (UniqueName: \"kubernetes.io/projected/31f242ad-af2c-4d0c-ac05-4bd5506759b0-kube-api-access-9n2h2\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829392 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/974989e6-23e2-445e-b544-682979f8bef6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829400 2648 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829407 2648 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829416 2648 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829561 kubelet[2648]: I0910 00:00:23.829424 2648 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/974989e6-23e2-445e-b544-682979f8bef6-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829723 kubelet[2648]: I0910 00:00:23.829432 2648 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8jv5d\" (UniqueName: \"kubernetes.io/projected/974989e6-23e2-445e-b544-682979f8bef6-kube-api-access-8jv5d\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.829723 kubelet[2648]: I0910 00:00:23.829439 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f242ad-af2c-4d0c-ac05-4bd5506759b0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:00:23.955686 systemd[1]: Removed slice kubepods-besteffort-pod31f242ad_af2c_4d0c_ac05_4bd5506759b0.slice - libcontainer container kubepods-besteffort-pod31f242ad_af2c_4d0c_ac05_4bd5506759b0.slice. Sep 10 00:00:23.967146 systemd[1]: Removed slice kubepods-burstable-pod974989e6_23e2_445e_b544_682979f8bef6.slice - libcontainer container kubepods-burstable-pod974989e6_23e2_445e_b544_682979f8bef6.slice. Sep 10 00:00:23.967391 systemd[1]: kubepods-burstable-pod974989e6_23e2_445e_b544_682979f8bef6.slice: Consumed 6.463s CPU time, 121.9M memory peak, 303K read from disk, 12.9M written to disk. Sep 10 00:00:24.423018 kubelet[2648]: E0910 00:00:24.422944 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:00:24.423133 kubelet[2648]: E0910 00:00:24.423051 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:00:24.541071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10c6443b8e57bb0741c17c0ad357113264f579fb119f23b24074695b782d1539-shm.mount: Deactivated successfully. Sep 10 00:00:24.541211 systemd[1]: var-lib-kubelet-pods-974989e6\x2d23e2\x2d445e\x2db544\x2d682979f8bef6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 10 00:00:24.541273 systemd[1]: var-lib-kubelet-pods-974989e6\x2d23e2\x2d445e\x2db544\x2d682979f8bef6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:00:24.541323 systemd[1]: var-lib-kubelet-pods-31f242ad\x2daf2c\x2d4d0c\x2dac05\x2d4bd5506759b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9n2h2.mount: Deactivated successfully. Sep 10 00:00:24.541378 systemd[1]: var-lib-kubelet-pods-974989e6\x2d23e2\x2d445e\x2db544\x2d682979f8bef6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8jv5d.mount: Deactivated successfully. Sep 10 00:00:25.368173 sshd[4293]: Connection closed by 10.0.0.1 port 47118 Sep 10 00:00:25.368679 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:25.380988 systemd[1]: sshd@23-10.0.0.125:22-10.0.0.1:47118.service: Deactivated successfully. Sep 10 00:00:25.382856 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:00:25.383117 systemd[1]: session-24.scope: Consumed 1.468s CPU time, 24.7M memory peak. Sep 10 00:00:25.383676 systemd-logind[1520]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:00:25.386333 systemd[1]: Started sshd@24-10.0.0.125:22-10.0.0.1:47172.service - OpenSSH per-connection server daemon (10.0.0.1:47172). Sep 10 00:00:25.387266 systemd-logind[1520]: Removed session 24. 
Sep 10 00:00:25.424550 kubelet[2648]: I0910 00:00:25.424338 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f242ad-af2c-4d0c-ac05-4bd5506759b0" path="/var/lib/kubelet/pods/31f242ad-af2c-4d0c-ac05-4bd5506759b0/volumes" Sep 10 00:00:25.425123 kubelet[2648]: I0910 00:00:25.425100 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="974989e6-23e2-445e-b544-682979f8bef6" path="/var/lib/kubelet/pods/974989e6-23e2-445e-b544-682979f8bef6/volumes" Sep 10 00:00:25.454419 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 47172 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:25.456114 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:25.461419 systemd-logind[1520]: New session 25 of user core. Sep 10 00:00:25.468743 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 00:00:26.232723 sshd[4447]: Connection closed by 10.0.0.1 port 47172 Sep 10 00:00:26.233914 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:26.247381 systemd[1]: sshd@24-10.0.0.125:22-10.0.0.1:47172.service: Deactivated successfully. Sep 10 00:00:26.249309 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:00:26.256572 systemd-logind[1520]: Session 25 logged out. Waiting for processes to exit. Sep 10 00:00:26.261901 kubelet[2648]: I0910 00:00:26.261375 2648 memory_manager.go:355] "RemoveStaleState removing state" podUID="31f242ad-af2c-4d0c-ac05-4bd5506759b0" containerName="cilium-operator" Sep 10 00:00:26.261901 kubelet[2648]: I0910 00:00:26.261404 2648 memory_manager.go:355] "RemoveStaleState removing state" podUID="974989e6-23e2-445e-b544-682979f8bef6" containerName="cilium-agent" Sep 10 00:00:26.262831 systemd[1]: Started sshd@25-10.0.0.125:22-10.0.0.1:47188.service - OpenSSH per-connection server daemon (10.0.0.1:47188). Sep 10 00:00:26.264354 systemd-logind[1520]: Removed session 25. 
Sep 10 00:00:26.272218 systemd[1]: Created slice kubepods-burstable-pod9a021de9_f266_46df_9bd7_b81e7c10e1b1.slice - libcontainer container kubepods-burstable-pod9a021de9_f266_46df_9bd7_b81e7c10e1b1.slice. Sep 10 00:00:26.335657 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 47188 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:26.337109 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:26.342018 systemd-logind[1520]: New session 26 of user core. Sep 10 00:00:26.346099 kubelet[2648]: I0910 00:00:26.346060 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-hostproc\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346099 kubelet[2648]: I0910 00:00:26.346098 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-cni-path\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346198 kubelet[2648]: I0910 00:00:26.346115 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-etc-cni-netd\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346198 kubelet[2648]: I0910 00:00:26.346133 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-host-proc-sys-kernel\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") 
" pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346198 kubelet[2648]: I0910 00:00:26.346151 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrndk\" (UniqueName: \"kubernetes.io/projected/9a021de9-f266-46df-9bd7-b81e7c10e1b1-kube-api-access-nrndk\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346198 kubelet[2648]: I0910 00:00:26.346168 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-bpf-maps\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346198 kubelet[2648]: I0910 00:00:26.346182 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a021de9-f266-46df-9bd7-b81e7c10e1b1-clustermesh-secrets\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346301 kubelet[2648]: I0910 00:00:26.346199 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9a021de9-f266-46df-9bd7-b81e7c10e1b1-cilium-ipsec-secrets\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346301 kubelet[2648]: I0910 00:00:26.346215 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-cilium-cgroup\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346301 kubelet[2648]: I0910 00:00:26.346230 2648 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a021de9-f266-46df-9bd7-b81e7c10e1b1-hubble-tls\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346301 kubelet[2648]: I0910 00:00:26.346246 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a021de9-f266-46df-9bd7-b81e7c10e1b1-cilium-config-path\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346301 kubelet[2648]: I0910 00:00:26.346262 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-lib-modules\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346301 kubelet[2648]: I0910 00:00:26.346278 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-xtables-lock\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346416 kubelet[2648]: I0910 00:00:26.346294 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-host-proc-sys-net\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.346416 kubelet[2648]: I0910 00:00:26.346308 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/9a021de9-f266-46df-9bd7-b81e7c10e1b1-cilium-run\") pod \"cilium-cxpw6\" (UID: \"9a021de9-f266-46df-9bd7-b81e7c10e1b1\") " pod="kube-system/cilium-cxpw6" Sep 10 00:00:26.356716 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 10 00:00:26.408187 sshd[4463]: Connection closed by 10.0.0.1 port 47188 Sep 10 00:00:26.408450 sshd-session[4460]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:26.418623 systemd[1]: sshd@25-10.0.0.125:22-10.0.0.1:47188.service: Deactivated successfully. Sep 10 00:00:26.420165 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 00:00:26.422962 systemd-logind[1520]: Session 26 logged out. Waiting for processes to exit. Sep 10 00:00:26.424422 systemd[1]: Started sshd@26-10.0.0.125:22-10.0.0.1:47190.service - OpenSSH per-connection server daemon (10.0.0.1:47190). Sep 10 00:00:26.426609 systemd-logind[1520]: Removed session 26. Sep 10 00:00:26.491690 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 47190 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 10 00:00:26.492956 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:26.496431 systemd-logind[1520]: New session 27 of user core. Sep 10 00:00:26.504692 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 10 00:00:26.578190 kubelet[2648]: E0910 00:00:26.578154 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:26.579867 containerd[1546]: time="2025-09-10T00:00:26.579815837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxpw6,Uid:9a021de9-f266-46df-9bd7-b81e7c10e1b1,Namespace:kube-system,Attempt:0,}"
Sep 10 00:00:26.603464 containerd[1546]: time="2025-09-10T00:00:26.603418761Z" level=info msg="connecting to shim cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984" address="unix:///run/containerd/s/eb5b3c47e5d313d78049d16fc15ab8585b1c193b64a1115c83f5e78018ae3bb0" namespace=k8s.io protocol=ttrpc version=3
Sep 10 00:00:26.649738 systemd[1]: Started cri-containerd-cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984.scope - libcontainer container cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984.
Sep 10 00:00:26.713271 containerd[1546]: time="2025-09-10T00:00:26.713166495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxpw6,Uid:9a021de9-f266-46df-9bd7-b81e7c10e1b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\""
Sep 10 00:00:26.714195 kubelet[2648]: E0910 00:00:26.714145 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:26.716557 containerd[1546]: time="2025-09-10T00:00:26.716474537Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 00:00:26.725380 containerd[1546]: time="2025-09-10T00:00:26.725331128Z" level=info msg="Container aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082: CDI devices from CRI Config.CDIDevices: []"
Sep 10 00:00:26.731395 containerd[1546]: time="2025-09-10T00:00:26.731330667Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082\""
Sep 10 00:00:26.731824 containerd[1546]: time="2025-09-10T00:00:26.731795576Z" level=info msg="StartContainer for \"aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082\""
Sep 10 00:00:26.732573 containerd[1546]: time="2025-09-10T00:00:26.732541758Z" level=info msg="connecting to shim aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082" address="unix:///run/containerd/s/eb5b3c47e5d313d78049d16fc15ab8585b1c193b64a1115c83f5e78018ae3bb0" protocol=ttrpc version=3
Sep 10 00:00:26.757766 systemd[1]: Started cri-containerd-aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082.scope - libcontainer container aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082.
Sep 10 00:00:26.789544 containerd[1546]: time="2025-09-10T00:00:26.789493856Z" level=info msg="StartContainer for \"aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082\" returns successfully"
Sep 10 00:00:26.797360 systemd[1]: cri-containerd-aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082.scope: Deactivated successfully.
Sep 10 00:00:26.799159 containerd[1546]: time="2025-09-10T00:00:26.799114189Z" level=info msg="received exit event container_id:\"aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082\" id:\"aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082\" pid:4542 exited_at:{seconds:1757462426 nanos:798606841}"
Sep 10 00:00:26.799270 containerd[1546]: time="2025-09-10T00:00:26.799242146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082\" id:\"aa1cf18819174c48e563d0380a1670c19596290d6fa98e1f603b727318aed082\" pid:4542 exited_at:{seconds:1757462426 nanos:798606841}"
Sep 10 00:00:27.673169 kubelet[2648]: E0910 00:00:27.673115 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:27.676365 containerd[1546]: time="2025-09-10T00:00:27.676309935Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 00:00:27.693332 containerd[1546]: time="2025-09-10T00:00:27.692997912Z" level=info msg="Container 81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06: CDI devices from CRI Config.CDIDevices: []"
Sep 10 00:00:27.699012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684602374.mount: Deactivated successfully.
Sep 10 00:00:27.708344 containerd[1546]: time="2025-09-10T00:00:27.708065526Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06\""
Sep 10 00:00:27.708905 containerd[1546]: time="2025-09-10T00:00:27.708838748Z" level=info msg="StartContainer for \"81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06\""
Sep 10 00:00:27.710302 containerd[1546]: time="2025-09-10T00:00:27.710263915Z" level=info msg="connecting to shim 81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06" address="unix:///run/containerd/s/eb5b3c47e5d313d78049d16fc15ab8585b1c193b64a1115c83f5e78018ae3bb0" protocol=ttrpc version=3
Sep 10 00:00:27.737721 systemd[1]: Started cri-containerd-81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06.scope - libcontainer container 81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06.
Sep 10 00:00:27.763969 containerd[1546]: time="2025-09-10T00:00:27.763905244Z" level=info msg="StartContainer for \"81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06\" returns successfully"
Sep 10 00:00:27.768364 systemd[1]: cri-containerd-81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06.scope: Deactivated successfully.
Sep 10 00:00:27.770191 containerd[1546]: time="2025-09-10T00:00:27.770159821Z" level=info msg="received exit event container_id:\"81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06\" id:\"81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06\" pid:4588 exited_at:{seconds:1757462427 nanos:769982505}"
Sep 10 00:00:27.770250 containerd[1546]: time="2025-09-10T00:00:27.770230019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06\" id:\"81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06\" pid:4588 exited_at:{seconds:1757462427 nanos:769982505}"
Sep 10 00:00:27.787051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81dcab0304ebb79de9a7fe574e7c89afe71197192819a6acb151784e2ea59e06-rootfs.mount: Deactivated successfully.
Sep 10 00:00:28.479114 kubelet[2648]: E0910 00:00:28.479057 2648 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 00:00:28.679848 kubelet[2648]: E0910 00:00:28.679809 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:28.682702 containerd[1546]: time="2025-09-10T00:00:28.682663050Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 00:00:28.694752 containerd[1546]: time="2025-09-10T00:00:28.694699981Z" level=info msg="Container 2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b: CDI devices from CRI Config.CDIDevices: []"
Sep 10 00:00:28.707589 containerd[1546]: time="2025-09-10T00:00:28.707481575Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b\""
Sep 10 00:00:28.709043 containerd[1546]: time="2025-09-10T00:00:28.708906784Z" level=info msg="StartContainer for \"2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b\""
Sep 10 00:00:28.712471 containerd[1546]: time="2025-09-10T00:00:28.712150671Z" level=info msg="connecting to shim 2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b" address="unix:///run/containerd/s/eb5b3c47e5d313d78049d16fc15ab8585b1c193b64a1115c83f5e78018ae3bb0" protocol=ttrpc version=3
Sep 10 00:00:28.742706 systemd[1]: Started cri-containerd-2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b.scope - libcontainer container 2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b.
Sep 10 00:00:28.782047 systemd[1]: cri-containerd-2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b.scope: Deactivated successfully.
Sep 10 00:00:28.783910 containerd[1546]: time="2025-09-10T00:00:28.782923369Z" level=info msg="StartContainer for \"2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b\" returns successfully"
Sep 10 00:00:28.784491 containerd[1546]: time="2025-09-10T00:00:28.784455855Z" level=info msg="received exit event container_id:\"2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b\" id:\"2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b\" pid:4633 exited_at:{seconds:1757462428 nanos:784272779}"
Sep 10 00:00:28.785893 containerd[1546]: time="2025-09-10T00:00:28.785810825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b\" id:\"2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b\" pid:4633 exited_at:{seconds:1757462428 nanos:784272779}"
Sep 10 00:00:28.803731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d8f8b4e276d8027ac8ae61246b2b7935738f5e5848eb724de97b0d0cb9d175b-rootfs.mount: Deactivated successfully.
Sep 10 00:00:29.687323 kubelet[2648]: E0910 00:00:29.687293 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:29.692140 containerd[1546]: time="2025-09-10T00:00:29.692104613Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 00:00:29.721663 containerd[1546]: time="2025-09-10T00:00:29.721615491Z" level=info msg="Container 9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877: CDI devices from CRI Config.CDIDevices: []"
Sep 10 00:00:29.763207 containerd[1546]: time="2025-09-10T00:00:29.763142827Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877\""
Sep 10 00:00:29.764064 containerd[1546]: time="2025-09-10T00:00:29.763728414Z" level=info msg="StartContainer for \"9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877\""
Sep 10 00:00:29.764780 containerd[1546]: time="2025-09-10T00:00:29.764747152Z" level=info msg="connecting to shim 9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877" address="unix:///run/containerd/s/eb5b3c47e5d313d78049d16fc15ab8585b1c193b64a1115c83f5e78018ae3bb0" protocol=ttrpc version=3
Sep 10 00:00:29.788702 systemd[1]: Started cri-containerd-9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877.scope - libcontainer container 9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877.
Sep 10 00:00:29.811491 systemd[1]: cri-containerd-9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877.scope: Deactivated successfully.
Sep 10 00:00:29.813970 containerd[1546]: time="2025-09-10T00:00:29.813467491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877\" id:\"9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877\" pid:4672 exited_at:{seconds:1757462429 nanos:812899584}"
Sep 10 00:00:29.813970 containerd[1546]: time="2025-09-10T00:00:29.813473931Z" level=info msg="received exit event container_id:\"9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877\" id:\"9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877\" pid:4672 exited_at:{seconds:1757462429 nanos:812899584}"
Sep 10 00:00:29.821103 containerd[1546]: time="2025-09-10T00:00:29.821019687Z" level=info msg="StartContainer for \"9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877\" returns successfully"
Sep 10 00:00:29.832671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fe6047d6ceb6a186aebb6e4c9ccbce0eeeaa22dd93ba04fe2664cd5db6fb877-rootfs.mount: Deactivated successfully.
Sep 10 00:00:30.422940 kubelet[2648]: E0910 00:00:30.422898 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:30.696050 kubelet[2648]: E0910 00:00:30.695864 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:30.707823 containerd[1546]: time="2025-09-10T00:00:30.707703585Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 00:00:30.748635 containerd[1546]: time="2025-09-10T00:00:30.748589318Z" level=info msg="Container 53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e: CDI devices from CRI Config.CDIDevices: []"
Sep 10 00:00:30.765718 containerd[1546]: time="2025-09-10T00:00:30.765662316Z" level=info msg="CreateContainer within sandbox \"cd0fa099178f530defb13a597685c7ca1fed515c24e078520f4ab521050a4984\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\""
Sep 10 00:00:30.768447 containerd[1546]: time="2025-09-10T00:00:30.768395818Z" level=info msg="StartContainer for \"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\""
Sep 10 00:00:30.769576 containerd[1546]: time="2025-09-10T00:00:30.769539434Z" level=info msg="connecting to shim 53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e" address="unix:///run/containerd/s/eb5b3c47e5d313d78049d16fc15ab8585b1c193b64a1115c83f5e78018ae3bb0" protocol=ttrpc version=3
Sep 10 00:00:30.804730 systemd[1]: Started cri-containerd-53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e.scope - libcontainer container 53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e.
Sep 10 00:00:30.838048 containerd[1546]: time="2025-09-10T00:00:30.837998703Z" level=info msg="StartContainer for \"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\" returns successfully"
Sep 10 00:00:30.894664 containerd[1546]: time="2025-09-10T00:00:30.894621942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\" id:\"1e020e46663b9f5f1671f27a7ebee294c4508b9761a58a781304a76b1774b465\" pid:4741 exited_at:{seconds:1757462430 nanos:894304509}"
Sep 10 00:00:31.109537 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 10 00:00:31.702927 kubelet[2648]: E0910 00:00:31.702861 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:31.736356 kubelet[2648]: I0910 00:00:31.736261 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cxpw6" podStartSLOduration=5.736246418 podStartE2EDuration="5.736246418s" podCreationTimestamp="2025-09-10 00:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:00:31.730966647 +0000 UTC m=+88.410373049" watchObservedRunningTime="2025-09-10 00:00:31.736246418 +0000 UTC m=+88.415652820"
Sep 10 00:00:32.705409 kubelet[2648]: E0910 00:00:32.705367 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:32.960808 containerd[1546]: time="2025-09-10T00:00:32.960401957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\" id:\"2e3ed917b5a8b5e8f59dde0aae38390687f6afe3dc17bf784387a012b1584070\" pid:4902 exit_status:1 exited_at:{seconds:1757462432 nanos:960025605}"
Sep 10 00:00:34.124108 systemd-networkd[1424]: lxc_health: Link UP
Sep 10 00:00:34.130776 systemd-networkd[1424]: lxc_health: Gained carrier
Sep 10 00:00:34.579923 kubelet[2648]: E0910 00:00:34.579637 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:34.709216 kubelet[2648]: E0910 00:00:34.709182 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:35.121834 containerd[1546]: time="2025-09-10T00:00:35.121783111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\" id:\"abdd4cd6b3b953b2bcc7a4945a90ebb4fa5185b4b3caaaf1633b531a8b210aa9\" pid:5273 exited_at:{seconds:1757462435 nanos:119794148}"
Sep 10 00:00:35.418680 systemd-networkd[1424]: lxc_health: Gained IPv6LL
Sep 10 00:00:35.711547 kubelet[2648]: E0910 00:00:35.710856 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:00:37.248363 containerd[1546]: time="2025-09-10T00:00:37.248300245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\" id:\"a90f91b778ddda0a46ac0baf9025333760197aabafb6a3d5e7d770ecfcc819ef\" pid:5308 exited_at:{seconds:1757462437 nanos:247576738}"
Sep 10 00:00:39.375919 containerd[1546]: time="2025-09-10T00:00:39.375868575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53281ca72a071959f9e6722adca19ee799eb0f9bae2dd22256d4f023337a5e9e\" id:\"005288d8cec1c336ab7e0cb169ae604a9cafa9b3c40919cc00f131612518c005\" pid:5337 exited_at:{seconds:1757462439 nanos:375506181}"
Sep 10 00:00:39.387572 sshd[4477]: Connection closed by 10.0.0.1 port 47190
Sep 10 00:00:39.388134 sshd-session[4470]: pam_unix(sshd:session): session closed for user core
Sep 10 00:00:39.392551 systemd[1]: sshd@26-10.0.0.125:22-10.0.0.1:47190.service: Deactivated successfully.
Sep 10 00:00:39.394388 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 00:00:39.395066 systemd-logind[1520]: Session 27 logged out. Waiting for processes to exit.
Sep 10 00:00:39.397194 systemd-logind[1520]: Removed session 27.