Sep 12 17:23:33.781791 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 17:23:33.781813 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025 Sep 12 17:23:33.781828 kernel: KASLR enabled Sep 12 17:23:33.781837 kernel: efi: EFI v2.7 by EDK II Sep 12 17:23:33.781842 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Sep 12 17:23:33.781848 kernel: random: crng init done Sep 12 17:23:33.781855 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 12 17:23:33.781861 kernel: secureboot: Secure boot enabled Sep 12 17:23:33.781867 kernel: ACPI: Early table checksum verification disabled Sep 12 17:23:33.781874 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Sep 12 17:23:33.781880 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 12 17:23:33.781886 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781892 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781898 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781906 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781913 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781920 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781926 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781932 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781939 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:23:33.781945 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 12 17:23:33.781951 kernel: ACPI: Use ACPI SPCR as default console: No Sep 12 17:23:33.781958 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:23:33.781964 kernel: NODE_DATA(0) allocated [mem 0xdc736a00-0xdc73dfff] Sep 12 17:23:33.781970 kernel: Zone ranges: Sep 12 17:23:33.781977 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:23:33.781984 kernel: DMA32 empty Sep 12 17:23:33.781990 kernel: Normal empty Sep 12 17:23:33.781996 kernel: Device empty Sep 12 17:23:33.782002 kernel: Movable zone start for each node Sep 12 17:23:33.782008 kernel: Early memory node ranges Sep 12 17:23:33.782014 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Sep 12 17:23:33.782020 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Sep 12 17:23:33.782027 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Sep 12 17:23:33.782033 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Sep 12 17:23:33.782039 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Sep 12 17:23:33.782045 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Sep 12 17:23:33.782053 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Sep 12 17:23:33.782059 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Sep 12 17:23:33.782065 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 12 17:23:33.782074 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:23:33.782081 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 12 17:23:33.782087 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Sep 12 17:23:33.782094 kernel: psci: probing for conduit method from ACPI. Sep 12 17:23:33.782102 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:23:33.782109 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:23:33.782115 kernel: psci: Trusted OS migration not required Sep 12 17:23:33.782122 kernel: psci: SMC Calling Convention v1.1 Sep 12 17:23:33.782129 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 12 17:23:33.782135 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 12 17:23:33.782142 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 12 17:23:33.782149 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 12 17:23:33.782156 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:23:33.782164 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:23:33.782170 kernel: CPU features: detected: Spectre-v4 Sep 12 17:23:33.782177 kernel: CPU features: detected: Spectre-BHB Sep 12 17:23:33.782184 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:23:33.782190 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:23:33.782197 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 17:23:33.782211 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:23:33.782218 kernel: alternatives: applying boot alternatives Sep 12 17:23:33.782226 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:23:33.782233 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:23:33.782239 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:23:33.782249 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:23:33.782256 kernel: Fallback order for Node 0: 0 Sep 12 17:23:33.782262 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 12 17:23:33.782269 kernel: Policy zone: DMA Sep 12 17:23:33.782275 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:23:33.782282 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 12 17:23:33.782288 kernel: software IO TLB: area num 4. Sep 12 17:23:33.782295 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 12 17:23:33.782302 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Sep 12 17:23:33.782308 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 17:23:33.782315 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:23:33.782322 kernel: rcu: RCU event tracing is enabled. Sep 12 17:23:33.782331 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 17:23:33.782337 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:23:33.782344 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:23:33.782351 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:23:33.782358 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 17:23:33.782365 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:23:33.782372 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:23:33.782379 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:23:33.782385 kernel: GICv3: 256 SPIs implemented Sep 12 17:23:33.782392 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:23:33.782398 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:23:33.782406 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 17:23:33.782474 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 12 17:23:33.782481 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 12 17:23:33.782488 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 12 17:23:33.782495 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 12 17:23:33.782501 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 12 17:23:33.782508 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 12 17:23:33.782515 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 12 17:23:33.782522 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:23:33.782528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:23:33.782535 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 17:23:33.782542 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 17:23:33.782551 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 17:23:33.782558 kernel: arm-pv: using stolen time PV Sep 12 17:23:33.782565 kernel: Console: colour dummy device 80x25 Sep 12 17:23:33.782571 kernel: ACPI: Core revision 20240827 Sep 12 17:23:33.782578 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 17:23:33.782585 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:23:33.782592 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 17:23:33.782599 kernel: landlock: Up and running. Sep 12 17:23:33.782605 kernel: SELinux: Initializing. Sep 12 17:23:33.782613 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:23:33.782620 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:23:33.782627 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:23:33.782634 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:23:33.782641 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 17:23:33.782648 kernel: Remapping and enabling EFI services. Sep 12 17:23:33.782655 kernel: smp: Bringing up secondary CPUs ... 
Sep 12 17:23:33.782661 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:23:33.782668 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 12 17:23:33.782676 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 12 17:23:33.782688 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:23:33.782695 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 17:23:33.782704 kernel: Detected PIPT I-cache on CPU2 Sep 12 17:23:33.782711 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 12 17:23:33.782718 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 12 17:23:33.782725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:23:33.782732 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 12 17:23:33.782740 kernel: Detected PIPT I-cache on CPU3 Sep 12 17:23:33.782748 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 12 17:23:33.782755 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 12 17:23:33.782763 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:23:33.782769 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 12 17:23:33.782777 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 17:23:33.782784 kernel: SMP: Total of 4 processors activated. Sep 12 17:23:33.782791 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:23:33.782798 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:23:33.782805 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:23:33.782814 kernel: CPU features: detected: Common not Private translations Sep 12 17:23:33.782821 kernel: CPU features: detected: CRC32 instructions Sep 12 17:23:33.782828 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 12 17:23:33.782835 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:23:33.782843 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:23:33.782850 kernel: CPU features: detected: Privileged Access Never Sep 12 17:23:33.782857 kernel: CPU features: detected: RAS Extension Support Sep 12 17:23:33.782864 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 17:23:33.782871 kernel: alternatives: applying system-wide alternatives Sep 12 17:23:33.782879 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 12 17:23:33.782887 kernel: Memory: 2422432K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 127520K reserved, 16384K cma-reserved) Sep 12 17:23:33.782895 kernel: devtmpfs: initialized Sep 12 17:23:33.782902 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:23:33.782909 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 17:23:33.782916 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:23:33.782923 kernel: 0 pages in range for non-PLT usage Sep 12 17:23:33.782930 kernel: 508576 pages in range for PLT usage Sep 12 17:23:33.782938 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:23:33.782946 kernel: SMBIOS 3.0.0 present. 
Sep 12 17:23:33.782953 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 12 17:23:33.782960 kernel: DMI: Memory slots populated: 1/1 Sep 12 17:23:33.782968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:23:33.782975 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:23:33.782982 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:23:33.782989 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:23:33.782997 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:23:33.783004 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Sep 12 17:23:33.783013 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:23:33.783020 kernel: cpuidle: using governor menu Sep 12 17:23:33.783028 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 17:23:33.783035 kernel: ASID allocator initialised with 32768 entries Sep 12 17:23:33.783042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:23:33.783049 kernel: Serial: AMBA PL011 UART driver Sep 12 17:23:33.783056 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:23:33.783063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:23:33.783072 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:23:33.783079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:23:33.783086 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:23:33.783093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:23:33.783100 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:23:33.783107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:23:33.783114 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:23:33.783121 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:23:33.783129 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:23:33.783136 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:23:33.783144 kernel: ACPI: Interpreter enabled Sep 12 17:23:33.783152 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:23:33.783159 kernel: ACPI: MCFG table detected, 1 entries Sep 12 17:23:33.783165 kernel: ACPI: CPU0 has been hot-added Sep 12 17:23:33.783173 kernel: ACPI: CPU1 has been hot-added Sep 12 17:23:33.783179 kernel: ACPI: CPU2 has been hot-added Sep 12 17:23:33.783187 kernel: ACPI: CPU3 has been hot-added Sep 12 17:23:33.783194 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:23:33.783206 kernel: printk: legacy console [ttyAMA0] enabled Sep 12 17:23:33.783215 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:23:33.783358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:23:33.783442 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 17:23:33.783506 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 17:23:33.783566 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 12 17:23:33.783625 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 12 17:23:33.783634 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 12 17:23:33.783645 kernel: PCI host bridge to bus 0000:00
Sep 12 17:23:33.783714 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 12 17:23:33.783770 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 17:23:33.783825 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 12 17:23:33.783879 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:23:33.783957 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 12 17:23:33.784032 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 12 17:23:33.784098 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 12 17:23:33.784159 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 12 17:23:33.784234 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 17:23:33.784298 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 12 17:23:33.784359 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 12 17:23:33.784446 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 12 17:23:33.784512 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 12 17:23:33.784567 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 17:23:33.784622 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 12 17:23:33.784631 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 17:23:33.784639 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 17:23:33.784646 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 17:23:33.784653 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 17:23:33.784660 kernel: iommu: Default domain type: Translated Sep 12 17:23:33.784669 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:23:33.784677 kernel: efivars: Registered efivars operations Sep 12 17:23:33.784684 kernel: vgaarb: loaded Sep 12 17:23:33.784691 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:23:33.784698 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:23:33.784705 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:23:33.784712 kernel: pnp: PnP ACPI init Sep 12 17:23:33.784786 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 12 17:23:33.784797 kernel: pnp: PnP ACPI: found 1 devices Sep 12 17:23:33.784806 kernel: NET: Registered PF_INET protocol family Sep 12 17:23:33.784814 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:23:33.784821 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:23:33.784828 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:23:33.784836 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:23:33.784843 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:23:33.784850 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:23:33.784857 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:23:33.784865 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:23:33.784873 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:23:33.784881 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:23:33.784887 kernel: kvm [1]: HYP mode not available
Sep 12 17:23:33.784895 kernel: Initialise system trusted keyrings Sep 12 17:23:33.784902 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:23:33.784909 kernel: Key type asymmetric registered Sep 12 17:23:33.784916 kernel: Asymmetric key parser 'x509' registered Sep 12 17:23:33.784923 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 12 17:23:33.784931 kernel: io scheduler mq-deadline registered Sep 12 17:23:33.784939 kernel: io scheduler kyber registered Sep 12 17:23:33.784946 kernel: io scheduler bfq registered Sep 12 17:23:33.784954 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 17:23:33.784961 kernel: ACPI: button: Power Button [PWRB] Sep 12 17:23:33.784969 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 17:23:33.785033 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 12 17:23:33.785043 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:23:33.785050 kernel: thunder_xcv, ver 1.0 Sep 12 17:23:33.785057 kernel: thunder_bgx, ver 1.0 Sep 12 17:23:33.785066 kernel: nicpf, ver 1.0 Sep 12 17:23:33.785073 kernel: nicvf, ver 1.0 Sep 12 17:23:33.785143 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:23:33.785210 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:23:33 UTC (1757697813) Sep 12 17:23:33.785221 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:23:33.785228 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 12 17:23:33.785235 kernel: watchdog: NMI not fully supported Sep 12 17:23:33.785243 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:23:33.785252 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:23:33.785260 kernel: Segment Routing with IPv6 Sep 12 17:23:33.785267 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:23:33.785274 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:23:33.785281 kernel: Key type dns_resolver registered Sep 12 17:23:33.785288 kernel: registered taskstats version 1 Sep 12 17:23:33.785295 kernel: Loading compiled-in X.509 certificates Sep 12 17:23:33.785303 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7' Sep 12 17:23:33.785310 kernel: Demotion targets for Node 0: null Sep 12 17:23:33.785319 kernel: Key type .fscrypt registered Sep 12 17:23:33.785326 kernel: Key type fscrypt-provisioning registered Sep 12 17:23:33.785333 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:23:33.785340 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:23:33.785347 kernel: ima: No architecture policies found Sep 12 17:23:33.785354 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:23:33.785361 kernel: clk: Disabling unused clocks Sep 12 17:23:33.785368 kernel: PM: genpd: Disabling unused power domains Sep 12 17:23:33.785375 kernel: Warning: unable to open an initial console. Sep 12 17:23:33.785384 kernel: Freeing unused kernel memory: 38912K Sep 12 17:23:33.785391 kernel: Run /init as init process Sep 12 17:23:33.785398 kernel: with arguments: Sep 12 17:23:33.785405 kernel: /init Sep 12 17:23:33.785430 kernel: with environment: Sep 12 17:23:33.785438 kernel: HOME=/ Sep 12 17:23:33.785445 kernel: TERM=linux Sep 12 17:23:33.785452 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:23:33.785460 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:23:33.785472 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:23:33.785480 systemd[1]: Detected virtualization kvm. Sep 12 17:23:33.785488 systemd[1]: Detected architecture arm64. Sep 12 17:23:33.785495 systemd[1]: Running in initrd. Sep 12 17:23:33.785503 systemd[1]: No hostname configured, using default hostname. Sep 12 17:23:33.785510 systemd[1]: Hostname set to . Sep 12 17:23:33.785518 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:23:33.785527 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:23:33.785535 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:23:33.785542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:23:33.785550 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:23:33.785558 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:23:33.785566 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:23:33.785575 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:23:33.785585 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:23:33.785593 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:23:33.785601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:23:33.785609 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:23:33.785617 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:23:33.785625 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:23:33.785632 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:23:33.785640 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:23:33.785650 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:23:33.785658 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:23:33.785665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:23:33.785673 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:23:33.785681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:23:33.785689 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:23:33.785696 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:23:33.785704 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:23:33.785713 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:23:33.785721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:23:33.785729 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 12 17:23:33.785737 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:23:33.785745 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:23:33.785753 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:23:33.785760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:23:33.785768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:23:33.785776 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:23:33.785786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:23:33.785793 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:23:33.785801 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:23:33.785826 systemd-journald[243]: Collecting audit messages is disabled. Sep 12 17:23:33.785847 systemd-journald[243]: Journal started Sep 12 17:23:33.785865 systemd-journald[243]: Runtime Journal (/run/log/journal/0b1c87ecace342f88d8a1aee9fd23136) is 6M, max 48.5M, 42.4M free. Sep 12 17:23:33.778524 systemd-modules-load[244]: Inserted module 'overlay' Sep 12 17:23:33.789074 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:23:33.791361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:33.794997 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:23:33.795534 systemd-modules-load[244]: Inserted module 'br_netfilter' Sep 12 17:23:33.796459 kernel: Bridge firewalling registered Sep 12 17:23:33.796571 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:23:33.798255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:23:33.800093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:23:33.811648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:23:33.814546 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:23:33.815907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:23:33.819313 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 17:23:33.822174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:23:33.830538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:23:33.831548 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:23:33.834938 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:23:33.836948 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:23:33.839553 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:23:33.865767 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:23:33.881448 systemd-resolved[287]: Positive Trust Anchors: Sep 12 17:23:33.881467 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:23:33.881499 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:23:33.887094 systemd-resolved[287]: Defaulting to hostname 'linux'. Sep 12 17:23:33.889126 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:23:33.890209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:23:33.945444 kernel: SCSI subsystem initialized Sep 12 17:23:33.952440 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:23:33.963450 kernel: iscsi: registered transport (tcp) Sep 12 17:23:33.976440 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:23:33.976470 kernel: QLogic iSCSI HBA Driver Sep 12 17:23:33.993922 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:23:34.015661 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:23:34.017008 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:23:34.069883 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:23:34.072066 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:23:34.140457 kernel: raid6: neonx8 gen() 15624 MB/s Sep 12 17:23:34.157467 kernel: raid6: neonx4 gen() 5049 MB/s Sep 12 17:23:34.174470 kernel: raid6: neonx2 gen() 8327 MB/s Sep 12 17:23:34.192057 kernel: raid6: neonx1 gen() 7403 MB/s Sep 12 17:23:34.208449 kernel: raid6: int64x8 gen() 6593 MB/s Sep 12 17:23:34.226457 kernel: raid6: int64x4 gen() 7305 MB/s Sep 12 17:23:34.244452 kernel: raid6: int64x2 gen() 2113 MB/s Sep 12 17:23:34.261439 kernel: raid6: int64x1 gen() 2185 MB/s Sep 12 17:23:34.261462 kernel: raid6: using algorithm neonx8 gen() 15624 MB/s Sep 12 17:23:34.278444 kernel: raid6: .... xor() 11976 MB/s, rmw enabled Sep 12 17:23:34.278474 kernel: raid6: using neon recovery algorithm Sep 12 17:23:34.283640 kernel: xor: measuring software checksum speed Sep 12 17:23:34.283668 kernel: 8regs : 20956 MB/sec Sep 12 17:23:34.284723 kernel: 32regs : 21676 MB/sec Sep 12 17:23:34.284741 kernel: arm64_neon : 27123 MB/sec Sep 12 17:23:34.284750 kernel: xor: using function: arm64_neon (27123 MB/sec) Sep 12 17:23:34.338444 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:23:34.344456 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:23:34.346761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:23:34.379222 systemd-udevd[497]: Using default interface naming scheme 'v255'. Sep 12 17:23:34.383802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:23:34.385653 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:23:34.412224 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Sep 12 17:23:34.436554 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:23:34.438927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:23:34.503854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:23:34.506111 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:23:34.554441 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 12 17:23:34.561399 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:23:34.566948 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:23:34.567097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:34.570667 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:23:34.570690 kernel: GPT:9289727 != 19775487 Sep 12 17:23:34.570700 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:23:34.570704 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:23:34.585520 kernel: GPT:9289727 != 19775487 Sep 12 17:23:34.585661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:23:34.588381 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:23:34.588402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:23:34.612314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:34.627388 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:23:34.628844 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:23:34.638132 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:23:34.652834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:23:34.659935 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:23:34.660927 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:23:34.661922 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:23:34.664499 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:23:34.666523 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:23:34.669100 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:23:34.670888 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:23:34.700903 disk-uuid[589]: Primary Header is updated. Sep 12 17:23:34.700903 disk-uuid[589]: Secondary Entries is updated. Sep 12 17:23:34.700903 disk-uuid[589]: Secondary Header is updated. 
Sep 12 17:23:34.704693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:23:34.700993 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:23:35.713441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:23:35.713494 disk-uuid[597]: The operation has completed successfully. Sep 12 17:23:35.734137 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:23:35.734247 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:23:35.768738 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:23:35.791250 sh[609]: Success Sep 12 17:23:35.803794 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:23:35.803828 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:23:35.804938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:23:35.814919 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 12 17:23:35.839583 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:23:35.842155 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:23:35.862527 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:23:35.869942 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (621) Sep 12 17:23:35.869987 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194 Sep 12 17:23:35.870996 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:23:35.874839 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:23:35.874861 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:23:35.875861 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:23:35.877071 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:23:35.878149 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:23:35.878927 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:23:35.881769 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:23:35.910502 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652) Sep 12 17:23:35.910552 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:23:35.912231 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:23:35.914443 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:23:35.914474 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:23:35.918443 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:23:35.920502 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:23:35.922304 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:23:35.990964 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:23:35.993623 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 12 17:23:36.033109 systemd-networkd[801]: lo: Link UP Sep 12 17:23:36.033121 systemd-networkd[801]: lo: Gained carrier Sep 12 17:23:36.033810 ignition[695]: Ignition 2.21.0 Sep 12 17:23:36.033947 systemd-networkd[801]: Enumeration completed Sep 12 17:23:36.033817 ignition[695]: Stage: fetch-offline Sep 12 17:23:36.034334 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:23:36.033845 ignition[695]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:36.034338 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:23:36.033853 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:36.034636 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:23:36.034015 ignition[695]: parsed url from cmdline: "" Sep 12 17:23:36.035336 systemd-networkd[801]: eth0: Link UP Sep 12 17:23:36.034018 ignition[695]: no config URL provided Sep 12 17:23:36.035435 systemd-networkd[801]: eth0: Gained carrier Sep 12 17:23:36.034023 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:23:36.035445 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:23:36.034029 ignition[695]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:23:36.036204 systemd[1]: Reached target network.target - Network. Sep 12 17:23:36.034047 ignition[695]: op(1): [started] loading QEMU firmware config module Sep 12 17:23:36.034051 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:23:36.041834 ignition[695]: op(1): [finished] loading QEMU firmware config module Sep 12 17:23:36.056481 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:23:36.089248 ignition[695]: parsing config with SHA512: cc27815cfa69efcaa326a25abd54e8dc850ed2ae1eba580c7ca5997df14cbeacc413d9c287eb5875949df612d7c245705a34d2d3bfe4c1582882e2c976efbf43 Sep 12 17:23:36.095011 unknown[695]: fetched base config from "system" Sep 12 17:23:36.095023 unknown[695]: fetched user config from "qemu" Sep 12 17:23:36.095433 ignition[695]: fetch-offline: fetch-offline passed Sep 12 17:23:36.095485 ignition[695]: Ignition finished successfully Sep 12 17:23:36.098261 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:23:36.099489 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:23:36.100179 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:23:36.141695 ignition[810]: Ignition 2.21.0 Sep 12 17:23:36.141713 ignition[810]: Stage: kargs Sep 12 17:23:36.141838 ignition[810]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:36.141847 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:36.143050 ignition[810]: kargs: kargs passed Sep 12 17:23:36.143101 ignition[810]: Ignition finished successfully Sep 12 17:23:36.146471 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:23:36.148142 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:23:36.176148 ignition[818]: Ignition 2.21.0 Sep 12 17:23:36.176168 ignition[818]: Stage: disks Sep 12 17:23:36.176309 ignition[818]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:36.176317 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:36.178256 ignition[818]: disks: disks passed Sep 12 17:23:36.178308 ignition[818]: Ignition finished successfully Sep 12 17:23:36.183471 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:23:36.184461 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:23:36.185888 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:23:36.187454 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:23:36.188988 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:23:36.190339 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:23:36.192520 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:23:36.212385 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:23:36.217463 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:23:36.219776 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:23:36.277479 kernel: EXT4-fs (vda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none. Sep 12 17:23:36.278230 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:23:36.279466 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:23:36.282131 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:23:36.284138 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:23:36.285063 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:23:36.285103 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:23:36.285128 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:23:36.294005 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:23:36.295888 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:23:36.299519 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Sep 12 17:23:36.299550 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:23:36.301132 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:23:36.304637 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:23:36.304686 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:23:36.306370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:23:36.333076 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:23:36.337330 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:23:36.341335 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:23:36.344642 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:23:36.409020 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 12 17:23:36.411079 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:23:36.413567 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:23:36.436439 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:23:36.449542 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:23:36.461851 ignition[954]: INFO : Ignition 2.21.0 Sep 12 17:23:36.461851 ignition[954]: INFO : Stage: mount Sep 12 17:23:36.463140 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:36.463140 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:36.463140 ignition[954]: INFO : mount: mount passed Sep 12 17:23:36.466361 ignition[954]: INFO : Ignition finished successfully Sep 12 17:23:36.465926 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:23:36.468544 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:23:36.867980 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:23:36.870850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:23:36.896422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Sep 12 17:23:36.896462 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:23:36.896472 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:23:36.899542 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:23:36.899564 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:23:36.901044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:23:36.934990 ignition[982]: INFO : Ignition 2.21.0 Sep 12 17:23:36.934990 ignition[982]: INFO : Stage: files Sep 12 17:23:36.934990 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:36.934990 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:36.939025 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:23:36.939025 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:23:36.939025 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:23:36.942243 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:23:36.942243 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:23:36.942243 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:23:36.942243 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 12 17:23:36.942243 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 12 17:23:36.940367 unknown[982]: wrote ssh authorized keys file for user: core Sep 12 17:23:36.998948 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:23:37.245400 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 12 17:23:37.245400 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:23:37.248385 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 17:23:37.478622 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:23:37.557818 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:23:37.557818 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:23:37.560796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:23:37.574366 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:23:37.574366 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:23:37.574366 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 12 17:23:37.576441 systemd-networkd[801]: eth0: Gained IPv6LL Sep 12 17:23:37.905574 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:23:38.179393 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:23:38.179393 ignition[982]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 17:23:38.182616 ignition[982]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:23:38.196706 ignition[982]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:23:38.199747 ignition[982]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:23:38.200889 ignition[982]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:23:38.200889 ignition[982]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:23:38.200889 ignition[982]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:23:38.200889 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:23:38.200889 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:23:38.200889 ignition[982]: INFO : files: files passed Sep 12 17:23:38.200889 ignition[982]: INFO : Ignition finished successfully Sep 12 17:23:38.202402 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:23:38.206357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:23:38.210590 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:23:38.223636 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:23:38.225702 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:23:38.224517 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:23:38.227741 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:23:38.227741 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:23:38.231504 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:23:38.230430 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:23:38.232931 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:23:38.235149 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:23:38.283194 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:23:38.283304 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:23:38.285217 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:23:38.286692 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:23:38.288089 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:23:38.288867 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:23:38.321643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:23:38.323727 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:23:38.347293 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:23:38.348363 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:23:38.349977 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:23:38.351269 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:23:38.351399 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:23:38.353276 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:23:38.354875 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:23:38.356181 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:23:38.357586 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:23:38.359265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:23:38.360694 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:23:38.362147 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:23:38.363532 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:23:38.365009 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:23:38.366463 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:23:38.367894 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:23:38.369018 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:23:38.369144 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:23:38.371010 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:23:38.372484 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:23:38.374150 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:23:38.374265 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:23:38.375814 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:23:38.375929 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:23:38.378100 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:23:38.378228 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:23:38.379629 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:23:38.380784 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:23:38.384482 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:23:38.385529 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:23:38.387156 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 12 17:23:38.388335 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:23:38.388436 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:23:38.389591 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:23:38.389669 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:23:38.390830 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:23:38.390943 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:23:38.392291 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:23:38.392393 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:23:38.394323 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:23:38.396295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:23:38.397096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:23:38.397219 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:23:38.398610 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:23:38.398711 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:23:38.403335 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:23:38.408552 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:23:38.416778 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:23:38.426449 ignition[1037]: INFO : Ignition 2.21.0 Sep 12 17:23:38.426449 ignition[1037]: INFO : Stage: umount Sep 12 17:23:38.428054 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:38.428054 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:38.428054 ignition[1037]: INFO : umount: umount passed Sep 12 17:23:38.428054 ignition[1037]: INFO : Ignition finished successfully Sep 12 17:23:38.429942 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:23:38.430067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:23:38.431325 systemd[1]: Stopped target network.target - Network. Sep 12 17:23:38.433521 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:23:38.433591 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:23:38.435251 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:23:38.435301 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:23:38.437023 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:23:38.437078 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:23:38.438690 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:23:38.438738 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:23:38.440538 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:23:38.442282 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:23:38.451498 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:23:38.451623 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:23:38.455780 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:23:38.456185 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 12 17:23:38.456324 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:23:38.459656 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:23:38.460249 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:23:38.461951 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:23:38.461988 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:23:38.466525 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:23:38.467587 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:23:38.467651 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:23:38.469586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:23:38.469633 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:23:38.473307 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:23:38.473353 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:23:38.475314 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:23:38.475368 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:23:38.478313 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:23:38.482861 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:23:38.482928 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:23:38.489097 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:23:38.489214 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:23:38.490677 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:23:38.490723 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:23:38.496783 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:23:38.496934 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:23:38.499479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:23:38.499560 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:23:38.501031 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:23:38.501064 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:23:38.502702 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:23:38.502761 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:23:38.505394 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:23:38.505468 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:23:38.507893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:23:38.507954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:23:38.511555 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:23:38.513208 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:23:38.513276 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 12 17:23:38.516434 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:23:38.516486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:23:38.519558 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:23:38.519610 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:23:38.522603 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:23:38.522656 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:23:38.524791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:23:38.524847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:38.528540 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 17:23:38.528592 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 17:23:38.528623 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 17:23:38.528660 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:23:38.528989 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:23:38.529125 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:23:38.530557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:23:38.531511 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:23:38.535113 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:23:38.537198 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:23:38.555376 systemd[1]: Switching root. Sep 12 17:23:38.584722 systemd-journald[243]: Journal stopped Sep 12 17:23:39.425348 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Sep 12 17:23:39.425399 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:23:39.425479 kernel: SELinux: policy capability open_perms=1 Sep 12 17:23:39.425492 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:23:39.425506 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:23:39.425518 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:23:39.425537 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:23:39.425550 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:23:39.425559 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:23:39.425573 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:23:39.425582 kernel: audit: type=1403 audit(1757697818.788:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:23:39.425592 systemd[1]: Successfully loaded SELinux policy in 72.078ms. Sep 12 17:23:39.425616 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.493ms. Sep 12 17:23:39.425627 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:23:39.425637 systemd[1]: Detected virtualization kvm. 
Sep 12 17:23:39.425648 systemd[1]: Detected architecture arm64. Sep 12 17:23:39.425658 systemd[1]: Detected first boot. Sep 12 17:23:39.425668 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:23:39.425680 zram_generator::config[1084]: No configuration found. Sep 12 17:23:39.425691 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:23:39.425700 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:23:39.425710 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:23:39.425720 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:23:39.425732 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:23:39.425742 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:23:39.425752 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:23:39.425762 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:23:39.425772 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:23:39.425782 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:23:39.425792 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:23:39.425802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:23:39.425812 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:23:39.425823 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:23:39.425833 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:23:39.425843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:23:39.425853 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:23:39.425864 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:23:39.425873 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:23:39.425883 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:23:39.425893 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:23:39.425904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:23:39.425914 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:23:39.425924 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:23:39.425934 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:23:39.425944 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:23:39.425954 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:23:39.425964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:23:39.425973 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:23:39.425984 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:23:39.425994 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:23:39.426004 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 12 17:23:39.426014 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:23:39.426024 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:23:39.426034 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:23:39.426044 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:23:39.426054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:23:39.426064 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:23:39.426074 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:23:39.426085 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:23:39.426095 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:23:39.426105 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:23:39.426114 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:23:39.426124 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:23:39.426134 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:23:39.426144 systemd[1]: Reached target machines.target - Containers. Sep 12 17:23:39.426154 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:23:39.426173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:23:39.426187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:23:39.426197 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:23:39.426206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:23:39.426221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:23:39.426231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:23:39.426241 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:23:39.426253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:23:39.426263 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:23:39.426274 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:23:39.426284 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:23:39.426294 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:23:39.426304 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:23:39.426315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:23:39.426324 kernel: fuse: init (API version 7.41) Sep 12 17:23:39.426334 systemd[1]: Starting systemd-journald.service - Journal Service... 
Sep 12 17:23:39.426343 kernel: loop: module loaded Sep 12 17:23:39.426354 kernel: ACPI: bus type drm_connector registered Sep 12 17:23:39.426364 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:23:39.426375 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:23:39.426386 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:23:39.426396 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:23:39.426406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:23:39.426425 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:23:39.426435 systemd[1]: Stopped verity-setup.service. Sep 12 17:23:39.426447 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:23:39.426457 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:23:39.426467 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:23:39.426480 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:23:39.426513 systemd-journald[1152]: Collecting audit messages is disabled. Sep 12 17:23:39.426535 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:23:39.426546 systemd-journald[1152]: Journal started Sep 12 17:23:39.426568 systemd-journald[1152]: Runtime Journal (/run/log/journal/0b1c87ecace342f88d8a1aee9fd23136) is 6M, max 48.5M, 42.4M free. Sep 12 17:23:39.161849 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:23:39.181605 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:23:39.182020 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:23:39.428364 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:23:39.428989 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:23:39.430535 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:23:39.432022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:23:39.433646 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:23:39.433887 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:23:39.436467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:23:39.436729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:23:39.437946 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:23:39.438098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:23:39.439222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:23:39.439379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:23:39.440770 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:23:39.440937 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:23:39.442014 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:23:39.442161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:23:39.443506 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 12 17:23:39.444650 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:23:39.446030 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:23:39.447470 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:23:39.458932 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:23:39.461255 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:23:39.463330 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:23:39.464398 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:23:39.464505 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:23:39.466121 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:23:39.475334 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:23:39.476406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:23:39.477622 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:23:39.479554 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:23:39.480813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:23:39.483077 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:23:39.484152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:23:39.487565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:23:39.491150 systemd-journald[1152]: Time spent on flushing to /var/log/journal/0b1c87ecace342f88d8a1aee9fd23136 is 30.291ms for 890 entries. Sep 12 17:23:39.491150 systemd-journald[1152]: System Journal (/var/log/journal/0b1c87ecace342f88d8a1aee9fd23136) is 8M, max 195.6M, 187.6M free. Sep 12 17:23:39.530671 systemd-journald[1152]: Received client request to flush runtime journal. Sep 12 17:23:39.530726 kernel: loop0: detected capacity change from 0 to 100608 Sep 12 17:23:39.530744 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:23:39.490696 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:23:39.496679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:23:39.499623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:23:39.500994 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:23:39.502574 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:23:39.505656 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:23:39.508344 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:23:39.512755 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:23:39.519571 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. 
Sep 12 17:23:39.519581 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Sep 12 17:23:39.526880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:23:39.532690 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:23:39.535202 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:23:39.544462 kernel: loop1: detected capacity change from 0 to 119320 Sep 12 17:23:39.545064 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:23:39.550234 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:23:39.570073 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:23:39.571178 kernel: loop2: detected capacity change from 0 to 203944 Sep 12 17:23:39.573060 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:23:39.593376 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 12 17:23:39.593396 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 12 17:23:39.596892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:23:39.600547 kernel: loop3: detected capacity change from 0 to 100608 Sep 12 17:23:39.608438 kernel: loop4: detected capacity change from 0 to 119320 Sep 12 17:23:39.614432 kernel: loop5: detected capacity change from 0 to 203944 Sep 12 17:23:39.618489 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:23:39.618865 (sd-merge)[1226]: Merged extensions into '/usr'. Sep 12 17:23:39.623115 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:23:39.623132 systemd[1]: Reloading... Sep 12 17:23:39.688656 zram_generator::config[1252]: No configuration found. Sep 12 17:23:39.775316 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:23:39.840124 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:23:39.840657 systemd[1]: Reloading finished in 216 ms. Sep 12 17:23:39.866460 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:23:39.867658 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:23:39.889745 systemd[1]: Starting ensure-sysext.service... Sep 12 17:23:39.891465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:23:39.901141 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:23:39.901159 systemd[1]: Reloading... Sep 12 17:23:39.907985 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:23:39.908020 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:23:39.908295 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:23:39.908542 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:23:39.909252 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
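The sd-merge lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes image becomes visible to it through the /etc/extensions/kubernetes.raw symlink that Ignition wrote earlier. A rough Python sketch of how one might enumerate the candidate images before a merge is below; the search directories listed are the commonly documented ones and are an assumption, not an exhaustive or authoritative list.

```python
from pathlib import Path

# Rough sketch: list sysext candidate images to sanity-check what a merge
# would pick up. The search paths are the commonly documented extension
# directories (treat them as an assumption, not the full search order).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in map(Path, SEARCH_DIRS):
    if not d.is_dir():
        continue
    for entry in sorted(d.iterdir()):
        kind = "dir" if entry.is_dir() else "image"
        target = f" -> {entry.resolve()}" if entry.is_symlink() else ""
        print(f"{d}: {entry.name} ({kind}){target}")
```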
Sep 12 17:23:39.909595 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Sep 12 17:23:39.909651 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Sep 12 17:23:39.912462 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:23:39.912474 systemd-tmpfiles[1288]: Skipping /boot Sep 12 17:23:39.918333 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:23:39.918348 systemd-tmpfiles[1288]: Skipping /boot Sep 12 17:23:39.947590 zram_generator::config[1315]: No configuration found. Sep 12 17:23:40.079207 systemd[1]: Reloading finished in 177 ms. Sep 12 17:23:40.102237 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:23:40.108010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:23:40.119655 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:23:40.122534 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:23:40.124777 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:23:40.128573 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:23:40.135921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:23:40.140822 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:23:40.146302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:23:40.147619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:23:40.151695 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:23:40.154292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:23:40.156069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:23:40.156201 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:23:40.158455 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:23:40.160514 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:23:40.162660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:23:40.164471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:23:40.170322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:23:40.170788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:23:40.175239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:23:40.178007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:23:40.180380 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Sep 12 17:23:40.180622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 12 17:23:40.182654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:23:40.182832 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:23:40.187736 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:23:40.192386 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:23:40.194200 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:23:40.194379 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:23:40.196054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:23:40.198552 augenrules[1386]: No rules Sep 12 17:23:40.199654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:23:40.201429 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:23:40.201609 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:23:40.204033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:23:40.207023 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:23:40.209943 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:23:40.214098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:23:40.215004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:23:40.217003 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:23:40.236632 systemd[1]: Finished ensure-sysext.service. Sep 12 17:23:40.241602 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:23:40.242579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:23:40.244643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:23:40.247641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:23:40.252241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:23:40.263292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:23:40.265569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:23:40.265615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:23:40.267275 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:23:40.270340 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:23:40.271244 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:23:40.271804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:23:40.272709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 12 17:23:40.274873 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:23:40.275021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:23:40.276029 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:23:40.276188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:23:40.282701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:23:40.283498 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:23:40.286337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:23:40.286401 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:23:40.292982 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:23:40.296705 augenrules[1431]: /sbin/augenrules: No change Sep 12 17:23:40.307635 augenrules[1463]: No rules Sep 12 17:23:40.310217 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:23:40.312602 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:23:40.336870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:23:40.339001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:23:40.366894 systemd-resolved[1354]: Positive Trust Anchors: Sep 12 17:23:40.366912 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:23:40.366943 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:23:40.369621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:23:40.374054 systemd-resolved[1354]: Defaulting to hostname 'linux'. Sep 12 17:23:40.375477 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:23:40.376574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:23:40.386774 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:23:40.386806 systemd-networkd[1437]: lo: Link UP Sep 12 17:23:40.386810 systemd-networkd[1437]: lo: Gained carrier Sep 12 17:23:40.387555 systemd-networkd[1437]: Enumeration completed Sep 12 17:23:40.387949 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:23:40.387952 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:23:40.388070 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 17:23:40.389310 systemd-networkd[1437]: eth0: Link UP Sep 12 17:23:40.389428 systemd-networkd[1437]: eth0: Gained carrier Sep 12 17:23:40.389443 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:23:40.389528 systemd[1]: Reached target network.target - Network. Sep 12 17:23:40.390474 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:23:40.391754 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:23:40.393146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:23:40.394564 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:23:40.395857 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:23:40.395883 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:23:40.396718 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:23:40.397911 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:23:40.399285 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:23:40.400531 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:23:40.402103 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:23:40.404780 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:23:40.408036 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:23:40.409970 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:23:40.410483 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:23:40.411556 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:23:40.414931 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:23:40.414949 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Sep 12 17:23:40.416070 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:23:40.416116 systemd-timesyncd[1439]: Initial clock synchronization to Fri 2025-09-12 17:23:40.126245 UTC. Sep 12 17:23:40.416882 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:23:40.420278 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:23:40.422383 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:23:40.425747 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:23:40.430173 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:23:40.431891 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:23:40.433651 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:23:40.433738 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:23:40.434756 systemd[1]: Starting containerd.service - containerd container runtime... 
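Here eth0 was matched by the stock /usr/lib/systemd/network/zz-default.network and acquired 10.0.0.110/16 with gateway 10.0.0.1 over DHCPv4. The sketch below prints a minimal systemd-networkd unit with the same practical effect; Flatcar's real zz-default.network matches more broadly and enables more than plain DHCP, so treat the body as an illustrative assumption rather than the shipped file.

```python
# Minimal systemd.network unit equivalent to "DHCPv4 on eth0".
# This is an assumed example body, not Flatcar's actual zz-default.network.
unit = """\
[Match]
Name=eth0

[Network]
DHCP=ipv4
"""

# A copy placed under /etc/systemd/network/ would take precedence over the
# /usr/lib one; here we only print the text instead of installing anything.
print(unit)
```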
Sep 12 17:23:40.437085 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:23:40.440637 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:23:40.444102 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:23:40.446051 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:23:40.447358 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:23:40.450694 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:23:40.453657 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:23:40.456399 jq[1498]: false Sep 12 17:23:40.456649 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:23:40.461514 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:23:40.465496 extend-filesystems[1499]: Found /dev/vda6 Sep 12 17:23:40.467345 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:23:40.469110 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:23:40.469627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:23:40.470200 extend-filesystems[1499]: Found /dev/vda9 Sep 12 17:23:40.471095 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:23:40.472672 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:23:40.475730 extend-filesystems[1499]: Checking size of /dev/vda9 Sep 12 17:23:40.478477 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:23:40.483928 jq[1514]: true Sep 12 17:23:40.483739 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:23:40.485482 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:23:40.487440 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:23:40.487888 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:23:40.488066 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:23:40.488575 extend-filesystems[1499]: Resized partition /dev/vda9 Sep 12 17:23:40.490094 extend-filesystems[1525]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:23:40.490867 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:23:40.491021 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:23:40.501426 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:23:40.502965 update_engine[1512]: I20250912 17:23:40.502696 1512 main.cc:92] Flatcar Update Engine starting Sep 12 17:23:40.509980 (ntainerd)[1528]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:23:40.515850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:23:40.526441 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:23:40.533348 jq[1527]: true Sep 12 17:23:40.541150 extend-filesystems[1525]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:23:40.541150 extend-filesystems[1525]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:23:40.541150 extend-filesystems[1525]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:23:40.544797 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Sep 12 17:23:40.543148 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:23:40.542753 dbus-daemon[1496]: [system] SELinux support is enabled Sep 12 17:23:40.545877 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:23:40.546073 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:23:40.550686 update_engine[1512]: I20250912 17:23:40.550634 1512 update_check_scheduler.cc:74] Next update check in 5m27s Sep 12 17:23:40.551216 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:23:40.551251 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:23:40.552875 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:23:40.552895 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:23:40.554147 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:23:40.563215 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:23:40.564774 systemd-logind[1507]: New seat seat0. Sep 12 17:23:40.567455 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:23:40.568596 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:23:40.572602 tar[1526]: linux-arm64/helm Sep 12 17:23:40.594099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:40.609588 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:23:40.613516 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:23:40.615704 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
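The extend-filesystems unit grew the root ext4 filesystem online from 553472 to 1864699 blocks of 4 KiB each. A quick arithmetic check of what those block counts mean in bytes:

```python
# Convert the resize2fs block counts logged above (4 KiB blocks) into
# approximate sizes to see the effect of the online grow.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes ≈ {size / 2**30:.2f} GiB")
# before: 553472 blocks = 2267021312 bytes ≈ 2.11 GiB
# after:  1864699 blocks = 7637807104 bytes ≈ 7.11 GiB
```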
Sep 12 17:23:40.671723 locksmithd[1549]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:23:40.697640 containerd[1528]: time="2025-09-12T17:23:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:23:40.698629 containerd[1528]: time="2025-09-12T17:23:40.698592880Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:23:40.707135 containerd[1528]: time="2025-09-12T17:23:40.707099760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.88µs" Sep 12 17:23:40.707227 containerd[1528]: time="2025-09-12T17:23:40.707210320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:23:40.707281 containerd[1528]: time="2025-09-12T17:23:40.707268720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:23:40.707496 containerd[1528]: time="2025-09-12T17:23:40.707475040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:23:40.707568 containerd[1528]: time="2025-09-12T17:23:40.707555200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:23:40.707630 containerd[1528]: time="2025-09-12T17:23:40.707617880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:23:40.707738 containerd[1528]: time="2025-09-12T17:23:40.707719880Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:23:40.707793 containerd[1528]: time="2025-09-12T17:23:40.707779720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708054 containerd[1528]: time="2025-09-12T17:23:40.708029000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708113 containerd[1528]: time="2025-09-12T17:23:40.708100240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708173 containerd[1528]: time="2025-09-12T17:23:40.708147120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708227 containerd[1528]: time="2025-09-12T17:23:40.708215360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708379 containerd[1528]: time="2025-09-12T17:23:40.708363960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708649 containerd[1528]: time="2025-09-12T17:23:40.708626320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708731 containerd[1528]: time="2025-09-12T17:23:40.708716080Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:23:40.708779 containerd[1528]: time="2025-09-12T17:23:40.708766000Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:23:40.708855 containerd[1528]: time="2025-09-12T17:23:40.708840480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:23:40.709125 containerd[1528]: time="2025-09-12T17:23:40.709104880Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:23:40.709252 containerd[1528]: time="2025-09-12T17:23:40.709235000Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:23:40.713713 containerd[1528]: time="2025-09-12T17:23:40.713687280Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:23:40.713835 containerd[1528]: time="2025-09-12T17:23:40.713818920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:23:40.713898 containerd[1528]: time="2025-09-12T17:23:40.713885480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:23:40.713955 containerd[1528]: time="2025-09-12T17:23:40.713943280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:23:40.714013 containerd[1528]: time="2025-09-12T17:23:40.714002160Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:23:40.714064 containerd[1528]: time="2025-09-12T17:23:40.714052520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:23:40.714114 containerd[1528]: time="2025-09-12T17:23:40.714102520Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:23:40.714178 containerd[1528]: time="2025-09-12T17:23:40.714149800Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:23:40.714240 containerd[1528]: time="2025-09-12T17:23:40.714227320Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:23:40.714288 containerd[1528]: time="2025-09-12T17:23:40.714277600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:23:40.714332 containerd[1528]: time="2025-09-12T17:23:40.714321120Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:23:40.714385 containerd[1528]: time="2025-09-12T17:23:40.714373400Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:23:40.714563 containerd[1528]: time="2025-09-12T17:23:40.714542200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:23:40.714634 containerd[1528]: time="2025-09-12T17:23:40.714620000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:23:40.714706 containerd[1528]: time="2025-09-12T17:23:40.714691840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 
17:23:40.714758 containerd[1528]: time="2025-09-12T17:23:40.714746720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:23:40.714805 containerd[1528]: time="2025-09-12T17:23:40.714793640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:23:40.714851 containerd[1528]: time="2025-09-12T17:23:40.714839520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:23:40.714909 containerd[1528]: time="2025-09-12T17:23:40.714897760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:23:40.714964 containerd[1528]: time="2025-09-12T17:23:40.714952000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:23:40.715013 containerd[1528]: time="2025-09-12T17:23:40.715002360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:23:40.715059 containerd[1528]: time="2025-09-12T17:23:40.715048440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:23:40.715105 containerd[1528]: time="2025-09-12T17:23:40.715094560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:23:40.715343 containerd[1528]: time="2025-09-12T17:23:40.715327040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:23:40.715404 containerd[1528]: time="2025-09-12T17:23:40.715394080Z" level=info msg="Start snapshots syncer" Sep 12 17:23:40.715505 containerd[1528]: time="2025-09-12T17:23:40.715489200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:23:40.715774 containerd[1528]: time="2025-09-12T17:23:40.715736560Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:23:40.715913 containerd[1528]: time="2025-09-12T17:23:40.715897520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:23:40.716045 containerd[1528]: time="2025-09-12T17:23:40.716028120Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:23:40.716787 containerd[1528]: time="2025-09-12T17:23:40.716746600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:23:40.716851 containerd[1528]: time="2025-09-12T17:23:40.716803160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:23:40.716851 containerd[1528]: time="2025-09-12T17:23:40.716843280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:23:40.716886 containerd[1528]: time="2025-09-12T17:23:40.716858840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:23:40.716886 containerd[1528]: time="2025-09-12T17:23:40.716878040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:23:40.716918 containerd[1528]: time="2025-09-12T17:23:40.716894360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:23:40.716918 containerd[1528]: time="2025-09-12T17:23:40.716910720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:23:40.716967 containerd[1528]: time="2025-09-12T17:23:40.716944640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:23:40.716996 containerd[1528]: 
time="2025-09-12T17:23:40.716971840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:23:40.716996 containerd[1528]: time="2025-09-12T17:23:40.716989680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:23:40.717054 containerd[1528]: time="2025-09-12T17:23:40.717035800Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:23:40.717077 containerd[1528]: time="2025-09-12T17:23:40.717059920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:23:40.717095 containerd[1528]: time="2025-09-12T17:23:40.717075760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:23:40.717113 containerd[1528]: time="2025-09-12T17:23:40.717090560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:23:40.717113 containerd[1528]: time="2025-09-12T17:23:40.717101160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:23:40.717150 containerd[1528]: time="2025-09-12T17:23:40.717115800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:23:40.717150 containerd[1528]: time="2025-09-12T17:23:40.717131320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:23:40.717549 containerd[1528]: time="2025-09-12T17:23:40.717267720Z" level=info msg="runtime interface created" Sep 12 17:23:40.717549 containerd[1528]: time="2025-09-12T17:23:40.717291560Z" level=info msg="created NRI interface" Sep 12 17:23:40.717549 containerd[1528]: time="2025-09-12T17:23:40.717313880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:23:40.717549 containerd[1528]: time="2025-09-12T17:23:40.717342720Z" level=info msg="Connect containerd service" Sep 12 17:23:40.717549 containerd[1528]: time="2025-09-12T17:23:40.717389960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:23:40.718189 containerd[1528]: time="2025-09-12T17:23:40.718135360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:23:40.795120 containerd[1528]: time="2025-09-12T17:23:40.795048400Z" level=info msg="Start subscribing containerd event" Sep 12 17:23:40.795120 containerd[1528]: time="2025-09-12T17:23:40.795124840Z" level=info msg="Start recovering state" Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795237000Z" level=info msg="Start event monitor" Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795251440Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795260480Z" level=info msg="Start streaming server" Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795268960Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:23:40.796541 containerd[1528]: 
time="2025-09-12T17:23:40.795276200Z" level=info msg="runtime interface starting up..." Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795281840Z" level=info msg="starting plugins..." Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795293560Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795710200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795752200Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:23:40.796541 containerd[1528]: time="2025-09-12T17:23:40.795800120Z" level=info msg="containerd successfully booted in 0.098506s" Sep 12 17:23:40.795947 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:23:40.840674 tar[1526]: linux-arm64/LICENSE Sep 12 17:23:40.840769 tar[1526]: linux-arm64/README.md Sep 12 17:23:40.862461 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:23:41.233658 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:23:41.251863 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:23:41.255063 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:23:41.273294 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:23:41.273532 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:23:41.275844 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:23:41.299999 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:23:41.303534 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:23:41.306273 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:23:41.307489 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:23:42.182566 systemd-networkd[1437]: eth0: Gained IPv6LL Sep 12 17:23:42.184954 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:23:42.187547 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:23:42.190471 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:23:42.193177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:42.215058 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:23:42.231650 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:23:42.231873 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:23:42.234371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:23:42.235377 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:23:42.758581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:42.760123 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:23:42.763110 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:23:42.766509 systemd[1]: Startup finished in 2.024s (kernel) + 5.165s (initrd) + 4.050s (userspace) = 11.240s. 
Sep 12 17:23:43.149562 kubelet[1638]: E0912 17:23:43.149459 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:23:43.151854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:23:43.151992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:23:43.152334 systemd[1]: kubelet.service: Consumed 782ms CPU time, 258.1M memory peak. Sep 12 17:23:46.305217 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:23:46.307046 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:34340.service - OpenSSH per-connection server daemon (10.0.0.1:34340). Sep 12 17:23:46.376317 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 34340 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:46.378817 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:46.385378 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:23:46.386269 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:23:46.391257 systemd-logind[1507]: New session 1 of user core. Sep 12 17:23:46.411453 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:23:46.414224 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:23:46.431951 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:23:46.435257 systemd-logind[1507]: New session c1 of user core. Sep 12 17:23:46.583351 systemd[1656]: Queued start job for default target default.target. Sep 12 17:23:46.592402 systemd[1656]: Created slice app.slice - User Application Slice. Sep 12 17:23:46.592453 systemd[1656]: Reached target paths.target - Paths. Sep 12 17:23:46.592492 systemd[1656]: Reached target timers.target - Timers. Sep 12 17:23:46.593688 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:23:46.608032 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:23:46.608291 systemd[1656]: Reached target sockets.target - Sockets. Sep 12 17:23:46.608421 systemd[1656]: Reached target basic.target - Basic System. Sep 12 17:23:46.608521 systemd[1656]: Reached target default.target - Main User Target. Sep 12 17:23:46.608615 systemd[1656]: Startup finished in 166ms. Sep 12 17:23:46.609513 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:23:46.611061 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:23:46.678001 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:34354.service - OpenSSH per-connection server daemon (10.0.0.1:34354). Sep 12 17:23:46.731087 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 34354 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:46.733099 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:46.737009 systemd-logind[1507]: New session 2 of user core. Sep 12 17:23:46.747603 systemd[1]: Started session-2.scope - Session 2 of User core. 
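The kubelet exit above is caused by /var/lib/kubelet/config.yaml not existing yet; on a node like this the file is normally written when the node is initialized or joined, so the early failures and scheduled restarts are expected until that happens. A minimal sketch of a KubeletConfiguration that would satisfy the check, assuming containerd as the runtime (all values are illustrative, not the file later written on this host):

    # /var/lib/kubelet/config.yaml -- minimal illustrative sketch
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches the systemd cgroup driver containerd advertises above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                           # assumption: default kubeadm-style service DNS address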
Sep 12 17:23:46.797469 sshd[1670]: Connection closed by 10.0.0.1 port 34354 Sep 12 17:23:46.798035 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 12 17:23:46.808545 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:34354.service: Deactivated successfully. Sep 12 17:23:46.809944 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:23:46.812951 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:23:46.814981 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:34360.service - OpenSSH per-connection server daemon (10.0.0.1:34360). Sep 12 17:23:46.817124 systemd-logind[1507]: Removed session 2. Sep 12 17:23:46.879639 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 34360 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:46.880869 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:46.885101 systemd-logind[1507]: New session 3 of user core. Sep 12 17:23:46.892652 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:23:46.940604 sshd[1679]: Connection closed by 10.0.0.1 port 34360 Sep 12 17:23:46.941368 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Sep 12 17:23:46.951928 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:34360.service: Deactivated successfully. Sep 12 17:23:46.954596 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:23:46.956368 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:23:46.960663 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:34364.service - OpenSSH per-connection server daemon (10.0.0.1:34364). Sep 12 17:23:46.962613 systemd-logind[1507]: Removed session 3. Sep 12 17:23:47.025883 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 34364 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:47.027364 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:47.032494 systemd-logind[1507]: New session 4 of user core. Sep 12 17:23:47.051593 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:23:47.103396 sshd[1688]: Connection closed by 10.0.0.1 port 34364 Sep 12 17:23:47.102228 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Sep 12 17:23:47.115259 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:34364.service: Deactivated successfully. Sep 12 17:23:47.116613 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:23:47.120495 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:23:47.123619 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:34368.service - OpenSSH per-connection server daemon (10.0.0.1:34368). Sep 12 17:23:47.124560 systemd-logind[1507]: Removed session 4. Sep 12 17:23:47.193639 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 34368 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:47.194888 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:47.203487 systemd-logind[1507]: New session 5 of user core. Sep 12 17:23:47.210864 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:23:47.267658 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:23:47.267929 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:23:47.285654 sudo[1698]: pam_unix(sudo:session): session closed for user root Sep 12 17:23:47.287260 sshd[1697]: Connection closed by 10.0.0.1 port 34368 Sep 12 17:23:47.287790 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Sep 12 17:23:47.301492 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:34368.service: Deactivated successfully. Sep 12 17:23:47.303780 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:23:47.304432 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:23:47.306487 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:34374.service - OpenSSH per-connection server daemon (10.0.0.1:34374). Sep 12 17:23:47.307987 systemd-logind[1507]: Removed session 5. Sep 12 17:23:47.350273 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 34374 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:47.351623 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:47.356746 systemd-logind[1507]: New session 6 of user core. Sep 12 17:23:47.368604 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:23:47.421834 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:23:47.422081 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:23:47.494854 sudo[1709]: pam_unix(sudo:session): session closed for user root Sep 12 17:23:47.500393 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:23:47.500921 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:23:47.512123 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:23:47.548232 augenrules[1731]: No rules Sep 12 17:23:47.548809 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:23:47.548993 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:23:47.549951 sudo[1708]: pam_unix(sudo:session): session closed for user root Sep 12 17:23:47.551893 sshd[1707]: Connection closed by 10.0.0.1 port 34374 Sep 12 17:23:47.552327 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Sep 12 17:23:47.563092 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:34374.service: Deactivated successfully. Sep 12 17:23:47.565686 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:23:47.566432 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:23:47.568694 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:34386.service - OpenSSH per-connection server daemon (10.0.0.1:34386). Sep 12 17:23:47.569292 systemd-logind[1507]: Removed session 6. Sep 12 17:23:47.644751 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 34386 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:47.646644 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:47.654054 systemd-logind[1507]: New session 7 of user core. Sep 12 17:23:47.664634 systemd[1]: Started session-7.scope - Session 7 of User core. 
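The augenrules "No rules" result above follows directly from the two sudo commands in session 6: augenrules assembles /etc/audit/audit.rules from the *.rules drop-ins under /etc/audit/rules.d/, so deleting 80-selinux.rules and 99-default.rules and restarting audit-rules leaves an empty rule set. A sketch of what a replacement drop-in could look like, purely illustrative (the file name and watches are assumptions):

    # /etc/audit/rules.d/10-example.rules -- illustrative drop-in
    -D                                        # flush any loaded rules first
    -b 8192                                   # kernel audit backlog buffer
    -w /etc/kubernetes/ -p wa -k k8s-config   # log writes/attribute changes under /etc/kubernetes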
Sep 12 17:23:47.718923 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:23:47.719180 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:23:48.031861 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:23:48.042782 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:23:48.238748 dockerd[1765]: time="2025-09-12T17:23:48.238696834Z" level=info msg="Starting up" Sep 12 17:23:48.241221 dockerd[1765]: time="2025-09-12T17:23:48.241177442Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:23:48.253387 dockerd[1765]: time="2025-09-12T17:23:48.253336601Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:23:48.270992 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1305505544-merged.mount: Deactivated successfully. Sep 12 17:23:48.291624 dockerd[1765]: time="2025-09-12T17:23:48.291525818Z" level=info msg="Loading containers: start." Sep 12 17:23:48.301442 kernel: Initializing XFRM netlink socket Sep 12 17:23:48.528672 systemd-networkd[1437]: docker0: Link UP Sep 12 17:23:48.532295 dockerd[1765]: time="2025-09-12T17:23:48.532254882Z" level=info msg="Loading containers: done." Sep 12 17:23:48.546955 dockerd[1765]: time="2025-09-12T17:23:48.546858135Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:23:48.546955 dockerd[1765]: time="2025-09-12T17:23:48.546935223Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:23:48.547086 dockerd[1765]: time="2025-09-12T17:23:48.547019882Z" level=info msg="Initializing buildkit" Sep 12 17:23:48.574361 dockerd[1765]: time="2025-09-12T17:23:48.574315560Z" level=info msg="Completed buildkit initialization" Sep 12 17:23:48.582429 dockerd[1765]: time="2025-09-12T17:23:48.580478476Z" level=info msg="Daemon has completed initialization" Sep 12 17:23:48.582429 dockerd[1765]: time="2025-09-12T17:23:48.580554815Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:23:48.580696 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:23:49.289278 containerd[1528]: time="2025-09-12T17:23:49.289238659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:23:49.880252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388036493.mount: Deactivated successfully. 
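The overlay2 warning above ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is informational: with that kernel option on, dockerd computes image diffs itself instead of using the native overlayfs diff, which the message notes may slow image builds. Two quick checks, assuming the running kernel exposes its config at /proc/config.gz (not every build does):

    docker info --format '{{.Driver}}'                           # expected to print: overlay2
    zcat /proc/config.gz | grep CONFIG_OVERLAY_FS_REDIRECT_DIR   # confirm the option the warning refers to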
Sep 12 17:23:50.849760 containerd[1528]: time="2025-09-12T17:23:50.849701405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:50.850505 containerd[1528]: time="2025-09-12T17:23:50.850477026Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327" Sep 12 17:23:50.852207 containerd[1528]: time="2025-09-12T17:23:50.852167455Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:50.858856 containerd[1528]: time="2025-09-12T17:23:50.858816688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:50.859843 containerd[1528]: time="2025-09-12T17:23:50.859808370Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.570527628s" Sep 12 17:23:50.859843 containerd[1528]: time="2025-09-12T17:23:50.859842712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 17:23:50.861166 containerd[1528]: time="2025-09-12T17:23:50.861140860Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:23:52.044915 containerd[1528]: time="2025-09-12T17:23:52.044844830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:52.047380 containerd[1528]: time="2025-09-12T17:23:52.047148780Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769" Sep 12 17:23:52.048289 containerd[1528]: time="2025-09-12T17:23:52.048229731Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:52.050805 containerd[1528]: time="2025-09-12T17:23:52.050773504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:52.051830 containerd[1528]: time="2025-09-12T17:23:52.051777424Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.190605126s" Sep 12 17:23:52.051991 containerd[1528]: time="2025-09-12T17:23:52.051939144Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 
17:23:52.052564 containerd[1528]: time="2025-09-12T17:23:52.052512840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:23:53.144160 containerd[1528]: time="2025-09-12T17:23:53.144051372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:53.145587 containerd[1528]: time="2025-09-12T17:23:53.145547775Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508" Sep 12 17:23:53.147155 containerd[1528]: time="2025-09-12T17:23:53.147110490Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:53.150707 containerd[1528]: time="2025-09-12T17:23:53.150661553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:53.151906 containerd[1528]: time="2025-09-12T17:23:53.151851092Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.099277871s" Sep 12 17:23:53.151906 containerd[1528]: time="2025-09-12T17:23:53.151890601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 17:23:53.153025 containerd[1528]: time="2025-09-12T17:23:53.152989883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:23:53.402353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:23:53.404850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:53.564064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:53.569129 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:23:53.610134 kubelet[2059]: E0912 17:23:53.610061 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:23:53.613099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:23:53.613226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:23:53.613582 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.8M memory peak. Sep 12 17:23:54.210585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592511686.mount: Deactivated successfully. 
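The PullImage/ImageCreate entries in this stretch come from the containerd CRI image service, which is why they are logged by containerd[1528] rather than dockerd. The same images can be inspected or pre-pulled by hand over the CRI socket, e.g. (assuming crictl is configured for unix:///run/containerd/containerd.sock, for instance via /etc/crictl.yaml):

    crictl pull registry.k8s.io/kube-proxy:v1.31.13            # illustrative manual pull
    ctr --namespace k8s.io images ls | grep registry.k8s.io    # CRI-pulled images land in the k8s.io namespace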
Sep 12 17:23:54.544477 containerd[1528]: time="2025-09-12T17:23:54.544342258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:54.545483 containerd[1528]: time="2025-09-12T17:23:54.545454671Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909" Sep 12 17:23:54.546362 containerd[1528]: time="2025-09-12T17:23:54.546296861Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:54.549606 containerd[1528]: time="2025-09-12T17:23:54.549571582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:54.550782 containerd[1528]: time="2025-09-12T17:23:54.550746752Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.397723864s" Sep 12 17:23:54.550823 containerd[1528]: time="2025-09-12T17:23:54.550780138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 17:23:54.551227 containerd[1528]: time="2025-09-12T17:23:54.551208705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:23:55.054203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3743490960.mount: Deactivated successfully. 
Sep 12 17:23:55.780507 containerd[1528]: time="2025-09-12T17:23:55.780457110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:55.781294 containerd[1528]: time="2025-09-12T17:23:55.781257972Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 12 17:23:55.783280 containerd[1528]: time="2025-09-12T17:23:55.783235107Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:55.785419 containerd[1528]: time="2025-09-12T17:23:55.785347044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:55.786764 containerd[1528]: time="2025-09-12T17:23:55.786638430Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.235400228s" Sep 12 17:23:55.786764 containerd[1528]: time="2025-09-12T17:23:55.786674190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:23:55.787112 containerd[1528]: time="2025-09-12T17:23:55.787092997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:23:56.226811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244115561.mount: Deactivated successfully. 
Sep 12 17:23:56.233466 containerd[1528]: time="2025-09-12T17:23:56.233399597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:23:56.234464 containerd[1528]: time="2025-09-12T17:23:56.234426952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 17:23:56.235588 containerd[1528]: time="2025-09-12T17:23:56.235557081Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:23:56.238217 containerd[1528]: time="2025-09-12T17:23:56.238165510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:23:56.239126 containerd[1528]: time="2025-09-12T17:23:56.239000092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 451.823358ms" Sep 12 17:23:56.239126 containerd[1528]: time="2025-09-12T17:23:56.239038304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:23:56.240019 containerd[1528]: time="2025-09-12T17:23:56.239851152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:23:56.806724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010068111.mount: Deactivated successfully. 
Sep 12 17:23:58.460687 containerd[1528]: time="2025-09-12T17:23:58.460630611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:58.462549 containerd[1528]: time="2025-09-12T17:23:58.462495686Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 12 17:23:58.463668 containerd[1528]: time="2025-09-12T17:23:58.463613010Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:58.465896 containerd[1528]: time="2025-09-12T17:23:58.465848533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:58.466962 containerd[1528]: time="2025-09-12T17:23:58.466935451Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.226957989s" Sep 12 17:23:58.467005 containerd[1528]: time="2025-09-12T17:23:58.466964741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 17:24:03.638146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:24:03.641621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:24:03.656060 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:24:03.656328 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:24:03.657503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:24:03.660038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:24:03.686048 systemd[1]: Reload requested from client PID 2216 ('systemctl') (unit session-7.scope)... Sep 12 17:24:03.686069 systemd[1]: Reloading... Sep 12 17:24:03.765446 zram_generator::config[2268]: No configuration found. Sep 12 17:24:03.968500 systemd[1]: Reloading finished in 282 ms. Sep 12 17:24:04.034000 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:24:04.034086 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:24:04.034383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:24:04.034451 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95M memory peak. Sep 12 17:24:04.036012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:24:04.152922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:24:04.157856 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:24:04.199455 kubelet[2304]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:24:04.199455 kubelet[2304]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:24:04.199455 kubelet[2304]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:24:04.199455 kubelet[2304]: I0912 17:24:04.199344 2304 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:24:05.483818 kubelet[2304]: I0912 17:24:05.483760 2304 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:24:05.483818 kubelet[2304]: I0912 17:24:05.483804 2304 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:24:05.484212 kubelet[2304]: I0912 17:24:05.484087 2304 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:24:05.502022 kubelet[2304]: E0912 17:24:05.501969 2304 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:24:05.505213 kubelet[2304]: I0912 17:24:05.505175 2304 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:24:05.520142 kubelet[2304]: I0912 17:24:05.520107 2304 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:24:05.523917 kubelet[2304]: I0912 17:24:05.523884 2304 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:24:05.524763 kubelet[2304]: I0912 17:24:05.524729 2304 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:24:05.524935 kubelet[2304]: I0912 17:24:05.524906 2304 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:24:05.525131 kubelet[2304]: I0912 17:24:05.524934 2304 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:24:05.525271 kubelet[2304]: I0912 17:24:05.525260 2304 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:24:05.525297 kubelet[2304]: I0912 17:24:05.525272 2304 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:24:05.525549 kubelet[2304]: I0912 17:24:05.525536 2304 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:24:05.528456 kubelet[2304]: I0912 17:24:05.528355 2304 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:24:05.528456 kubelet[2304]: I0912 17:24:05.528395 2304 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:24:05.528456 kubelet[2304]: I0912 17:24:05.528436 2304 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:24:05.528456 kubelet[2304]: I0912 17:24:05.528452 2304 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:24:05.531423 kubelet[2304]: W0912 17:24:05.529371 2304 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Sep 12 17:24:05.531423 kubelet[2304]: E0912 17:24:05.531361 2304 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:24:05.531704 kubelet[2304]: W0912 17:24:05.530047 2304 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Sep 12 17:24:05.531704 kubelet[2304]: E0912 17:24:05.531658 2304 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:24:05.532479 kubelet[2304]: I0912 17:24:05.532343 2304 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:24:05.533126 kubelet[2304]: I0912 17:24:05.533093 2304 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:24:05.533292 kubelet[2304]: W0912 17:24:05.533279 2304 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:24:05.534387 kubelet[2304]: I0912 17:24:05.534353 2304 server.go:1274] "Started kubelet" Sep 12 17:24:05.537874 kubelet[2304]: I0912 17:24:05.535951 2304 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:24:05.537874 kubelet[2304]: I0912 17:24:05.536285 2304 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:24:05.537874 kubelet[2304]: I0912 17:24:05.536396 2304 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:24:05.537874 kubelet[2304]: I0912 17:24:05.536397 2304 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:24:05.538426 kubelet[2304]: I0912 17:24:05.538337 2304 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:24:05.538746 kubelet[2304]: I0912 17:24:05.538718 2304 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:24:05.541048 kubelet[2304]: I0912 17:24:05.539727 2304 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:24:05.541048 kubelet[2304]: I0912 17:24:05.539864 2304 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:24:05.541048 kubelet[2304]: I0912 17:24:05.539939 2304 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:24:05.541048 kubelet[2304]: W0912 17:24:05.540367 2304 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Sep 12 17:24:05.541048 kubelet[2304]: E0912 17:24:05.540433 2304 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" 
logger="UnhandledError" Sep 12 17:24:05.541048 kubelet[2304]: E0912 17:24:05.540744 2304 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:24:05.541048 kubelet[2304]: E0912 17:24:05.540827 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="200ms" Sep 12 17:24:05.541048 kubelet[2304]: I0912 17:24:05.541002 2304 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:24:05.547138 kubelet[2304]: I0912 17:24:05.547089 2304 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:24:05.548453 kubelet[2304]: E0912 17:24:05.547460 2304 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186498e0b98af1b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:24:05.534323123 +0000 UTC m=+1.373132082,LastTimestamp:2025-09-12 17:24:05.534323123 +0000 UTC m=+1.373132082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:24:05.550359 kubelet[2304]: I0912 17:24:05.549526 2304 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:24:05.562873 kubelet[2304]: I0912 17:24:05.562852 2304 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:24:05.562978 kubelet[2304]: I0912 17:24:05.562967 2304 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:24:05.563053 kubelet[2304]: I0912 17:24:05.563043 2304 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:24:05.565828 kubelet[2304]: I0912 17:24:05.565627 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:24:05.567516 kubelet[2304]: I0912 17:24:05.567489 2304 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:24:05.567516 kubelet[2304]: I0912 17:24:05.567521 2304 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:24:05.567626 kubelet[2304]: I0912 17:24:05.567547 2304 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:24:05.567626 kubelet[2304]: E0912 17:24:05.567588 2304 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:24:05.640918 kubelet[2304]: E0912 17:24:05.640829 2304 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:24:05.668226 kubelet[2304]: E0912 17:24:05.668151 2304 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:24:05.709785 kubelet[2304]: I0912 17:24:05.709718 2304 policy_none.go:49] "None policy: Start" Sep 12 17:24:05.711211 kubelet[2304]: I0912 17:24:05.710885 2304 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:24:05.711211 kubelet[2304]: I0912 17:24:05.710923 2304 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:24:05.711211 kubelet[2304]: W0912 17:24:05.710976 2304 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Sep 12 17:24:05.711426 kubelet[2304]: E0912 17:24:05.711355 2304 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:24:05.723260 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:24:05.741973 kubelet[2304]: E0912 17:24:05.741817 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="400ms" Sep 12 17:24:05.741973 kubelet[2304]: E0912 17:24:05.741870 2304 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:24:05.742746 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:24:05.756536 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 12 17:24:05.759466 kubelet[2304]: I0912 17:24:05.759042 2304 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:24:05.759466 kubelet[2304]: I0912 17:24:05.759334 2304 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:24:05.760034 kubelet[2304]: I0912 17:24:05.759876 2304 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:24:05.760823 kubelet[2304]: I0912 17:24:05.760206 2304 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:24:05.761989 kubelet[2304]: E0912 17:24:05.761956 2304 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:24:05.862226 kubelet[2304]: I0912 17:24:05.862182 2304 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:24:05.862848 kubelet[2304]: E0912 17:24:05.862813 2304 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Sep 12 17:24:05.881802 systemd[1]: Created slice kubepods-burstable-pod702e8ecab7779b1d563e8c2797dae2dc.slice - libcontainer container kubepods-burstable-pod702e8ecab7779b1d563e8c2797dae2dc.slice. Sep 12 17:24:05.904735 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 12 17:24:05.910106 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
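The three kubepods-burstable-pod*.slice units correspond to the static control-plane pods picked up from the static pod path noted earlier (/etc/kubernetes/manifests); the same pod UIDs reappear below in the volume and sandbox entries. A much-reduced sketch of what one such manifest looks like (real kubeadm manifests carry many more flags, probes and mounts; everything here is illustrative):

    # /etc/kubernetes/manifests/kube-scheduler.yaml -- illustrative shape only
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.31.13
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/scheduler.conf
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate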
Sep 12 17:24:05.944039 kubelet[2304]: I0912 17:24:05.943688 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:05.944039 kubelet[2304]: I0912 17:24:05.943742 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:24:05.944039 kubelet[2304]: I0912 17:24:05.943773 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/702e8ecab7779b1d563e8c2797dae2dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"702e8ecab7779b1d563e8c2797dae2dc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:24:05.944039 kubelet[2304]: I0912 17:24:05.943807 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/702e8ecab7779b1d563e8c2797dae2dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"702e8ecab7779b1d563e8c2797dae2dc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:24:05.944039 kubelet[2304]: I0912 17:24:05.943825 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:05.944281 kubelet[2304]: I0912 17:24:05.943840 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:05.944281 kubelet[2304]: I0912 17:24:05.943855 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:05.944281 kubelet[2304]: I0912 17:24:05.943870 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:05.944281 kubelet[2304]: I0912 17:24:05.943886 2304 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/702e8ecab7779b1d563e8c2797dae2dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"702e8ecab7779b1d563e8c2797dae2dc\") " 
pod="kube-system/kube-apiserver-localhost" Sep 12 17:24:06.065312 kubelet[2304]: I0912 17:24:06.064833 2304 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:24:06.065312 kubelet[2304]: E0912 17:24:06.065202 2304 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Sep 12 17:24:06.143577 kubelet[2304]: E0912 17:24:06.143524 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="800ms" Sep 12 17:24:06.204729 containerd[1528]: time="2025-09-12T17:24:06.204151132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:702e8ecab7779b1d563e8c2797dae2dc,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:06.211464 containerd[1528]: time="2025-09-12T17:24:06.210824381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:06.213652 containerd[1528]: time="2025-09-12T17:24:06.213504730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:06.278448 containerd[1528]: time="2025-09-12T17:24:06.278109798Z" level=info msg="connecting to shim 57e1b46a5bfdd5637c6c55966f53dac43bb6d4e3e63792ae617bf31c148a7967" address="unix:///run/containerd/s/b06c1ba2408624e073e5d205659cb5f6715a085f500122685812d6b2babcbc99" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:06.285643 containerd[1528]: time="2025-09-12T17:24:06.285447791Z" level=info msg="connecting to shim 990d7d8431805bc256aacf541065160de3ea60ea4f295a0957cfe9407cceefc8" address="unix:///run/containerd/s/ff358d2a3aed1f9c90ea765487521e1e197338ed6d7d10d0c94a757d84b4c87f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:06.289800 containerd[1528]: time="2025-09-12T17:24:06.289731317Z" level=info msg="connecting to shim 3ef2c9b74dc35574e8178de555aeed272d817c870d40a24e7bace7c3346bf4b4" address="unix:///run/containerd/s/3337775740753052fe69675d49d25006efdb65bc53b72aea5a2cf4cd2d06bfbe" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:06.316623 systemd[1]: Started cri-containerd-990d7d8431805bc256aacf541065160de3ea60ea4f295a0957cfe9407cceefc8.scope - libcontainer container 990d7d8431805bc256aacf541065160de3ea60ea4f295a0957cfe9407cceefc8. Sep 12 17:24:06.321589 systemd[1]: Started cri-containerd-3ef2c9b74dc35574e8178de555aeed272d817c870d40a24e7bace7c3346bf4b4.scope - libcontainer container 3ef2c9b74dc35574e8178de555aeed272d817c870d40a24e7bace7c3346bf4b4. Sep 12 17:24:06.323759 systemd[1]: Started cri-containerd-57e1b46a5bfdd5637c6c55966f53dac43bb6d4e3e63792ae617bf31c148a7967.scope - libcontainer container 57e1b46a5bfdd5637c6c55966f53dac43bb6d4e3e63792ae617bf31c148a7967. 
Sep 12 17:24:06.367012 containerd[1528]: time="2025-09-12T17:24:06.366967564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ef2c9b74dc35574e8178de555aeed272d817c870d40a24e7bace7c3346bf4b4\"" Sep 12 17:24:06.371239 containerd[1528]: time="2025-09-12T17:24:06.371196000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"990d7d8431805bc256aacf541065160de3ea60ea4f295a0957cfe9407cceefc8\"" Sep 12 17:24:06.371519 containerd[1528]: time="2025-09-12T17:24:06.371494536Z" level=info msg="CreateContainer within sandbox \"3ef2c9b74dc35574e8178de555aeed272d817c870d40a24e7bace7c3346bf4b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:24:06.375093 containerd[1528]: time="2025-09-12T17:24:06.375052915Z" level=info msg="CreateContainer within sandbox \"990d7d8431805bc256aacf541065160de3ea60ea4f295a0957cfe9407cceefc8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:24:06.381623 containerd[1528]: time="2025-09-12T17:24:06.381379490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:702e8ecab7779b1d563e8c2797dae2dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"57e1b46a5bfdd5637c6c55966f53dac43bb6d4e3e63792ae617bf31c148a7967\"" Sep 12 17:24:06.384123 containerd[1528]: time="2025-09-12T17:24:06.384069627Z" level=info msg="CreateContainer within sandbox \"57e1b46a5bfdd5637c6c55966f53dac43bb6d4e3e63792ae617bf31c148a7967\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:24:06.385080 containerd[1528]: time="2025-09-12T17:24:06.385046529Z" level=info msg="Container 54b19adcfb9b23e540eec3647dffd79aae5c87db29b0d5154712020ccdc535c2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:06.385926 containerd[1528]: time="2025-09-12T17:24:06.385890243Z" level=info msg="Container d815e205467737c179093b6c1a053567250412bfa2f569b2b79d491afe397a60: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:06.397563 containerd[1528]: time="2025-09-12T17:24:06.397511042Z" level=info msg="Container 3f42f40f216023af484bb71bb8a88c82a5c46ebb09351d0889a1aeb01cc23f03: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:06.397694 containerd[1528]: time="2025-09-12T17:24:06.397604402Z" level=info msg="CreateContainer within sandbox \"3ef2c9b74dc35574e8178de555aeed272d817c870d40a24e7bace7c3346bf4b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54b19adcfb9b23e540eec3647dffd79aae5c87db29b0d5154712020ccdc535c2\"" Sep 12 17:24:06.398483 containerd[1528]: time="2025-09-12T17:24:06.398452390Z" level=info msg="StartContainer for \"54b19adcfb9b23e540eec3647dffd79aae5c87db29b0d5154712020ccdc535c2\"" Sep 12 17:24:06.400423 containerd[1528]: time="2025-09-12T17:24:06.400292262Z" level=info msg="connecting to shim 54b19adcfb9b23e540eec3647dffd79aae5c87db29b0d5154712020ccdc535c2" address="unix:///run/containerd/s/3337775740753052fe69675d49d25006efdb65bc53b72aea5a2cf4cd2d06bfbe" protocol=ttrpc version=3 Sep 12 17:24:06.401226 containerd[1528]: time="2025-09-12T17:24:06.401186950Z" level=info msg="CreateContainer within sandbox \"990d7d8431805bc256aacf541065160de3ea60ea4f295a0957cfe9407cceefc8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"d815e205467737c179093b6c1a053567250412bfa2f569b2b79d491afe397a60\"" Sep 12 17:24:06.402079 containerd[1528]: time="2025-09-12T17:24:06.402049519Z" level=info msg="StartContainer for \"d815e205467737c179093b6c1a053567250412bfa2f569b2b79d491afe397a60\"" Sep 12 17:24:06.403595 containerd[1528]: time="2025-09-12T17:24:06.403561892Z" level=info msg="connecting to shim d815e205467737c179093b6c1a053567250412bfa2f569b2b79d491afe397a60" address="unix:///run/containerd/s/ff358d2a3aed1f9c90ea765487521e1e197338ed6d7d10d0c94a757d84b4c87f" protocol=ttrpc version=3 Sep 12 17:24:06.406437 containerd[1528]: time="2025-09-12T17:24:06.406067187Z" level=info msg="CreateContainer within sandbox \"57e1b46a5bfdd5637c6c55966f53dac43bb6d4e3e63792ae617bf31c148a7967\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f42f40f216023af484bb71bb8a88c82a5c46ebb09351d0889a1aeb01cc23f03\"" Sep 12 17:24:06.406722 containerd[1528]: time="2025-09-12T17:24:06.406696417Z" level=info msg="StartContainer for \"3f42f40f216023af484bb71bb8a88c82a5c46ebb09351d0889a1aeb01cc23f03\"" Sep 12 17:24:06.408364 containerd[1528]: time="2025-09-12T17:24:06.408326678Z" level=info msg="connecting to shim 3f42f40f216023af484bb71bb8a88c82a5c46ebb09351d0889a1aeb01cc23f03" address="unix:///run/containerd/s/b06c1ba2408624e073e5d205659cb5f6715a085f500122685812d6b2babcbc99" protocol=ttrpc version=3 Sep 12 17:24:06.411354 kubelet[2304]: W0912 17:24:06.411271 2304 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Sep 12 17:24:06.411354 kubelet[2304]: E0912 17:24:06.411348 2304 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:24:06.424659 systemd[1]: Started cri-containerd-54b19adcfb9b23e540eec3647dffd79aae5c87db29b0d5154712020ccdc535c2.scope - libcontainer container 54b19adcfb9b23e540eec3647dffd79aae5c87db29b0d5154712020ccdc535c2. Sep 12 17:24:06.429713 systemd[1]: Started cri-containerd-3f42f40f216023af484bb71bb8a88c82a5c46ebb09351d0889a1aeb01cc23f03.scope - libcontainer container 3f42f40f216023af484bb71bb8a88c82a5c46ebb09351d0889a1aeb01cc23f03. Sep 12 17:24:06.431781 systemd[1]: Started cri-containerd-d815e205467737c179093b6c1a053567250412bfa2f569b2b79d491afe397a60.scope - libcontainer container d815e205467737c179093b6c1a053567250412bfa2f569b2b79d491afe397a60. 
Sep 12 17:24:06.469781 kubelet[2304]: I0912 17:24:06.468703 2304 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:24:06.469781 kubelet[2304]: E0912 17:24:06.469097 2304 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Sep 12 17:24:06.488314 containerd[1528]: time="2025-09-12T17:24:06.487881540Z" level=info msg="StartContainer for \"54b19adcfb9b23e540eec3647dffd79aae5c87db29b0d5154712020ccdc535c2\" returns successfully" Sep 12 17:24:06.488314 containerd[1528]: time="2025-09-12T17:24:06.487933833Z" level=info msg="StartContainer for \"3f42f40f216023af484bb71bb8a88c82a5c46ebb09351d0889a1aeb01cc23f03\" returns successfully" Sep 12 17:24:06.492494 containerd[1528]: time="2025-09-12T17:24:06.491536315Z" level=info msg="StartContainer for \"d815e205467737c179093b6c1a053567250412bfa2f569b2b79d491afe397a60\" returns successfully" Sep 12 17:24:06.500680 kubelet[2304]: W0912 17:24:06.500591 2304 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Sep 12 17:24:06.500680 kubelet[2304]: E0912 17:24:06.500679 2304 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:24:07.271161 kubelet[2304]: I0912 17:24:07.271126 2304 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:24:08.007678 kubelet[2304]: E0912 17:24:08.007631 2304 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:24:08.175439 kubelet[2304]: I0912 17:24:08.175227 2304 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:24:08.175932 kubelet[2304]: E0912 17:24:08.175626 2304 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:24:08.531897 kubelet[2304]: I0912 17:24:08.531856 2304 apiserver.go:52] "Watching apiserver" Sep 12 17:24:08.540612 kubelet[2304]: I0912 17:24:08.540582 2304 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:24:08.594890 kubelet[2304]: E0912 17:24:08.594850 2304 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:24:10.095921 systemd[1]: Reload requested from client PID 2579 ('systemctl') (unit session-7.scope)... Sep 12 17:24:10.095942 systemd[1]: Reloading... Sep 12 17:24:10.163470 zram_generator::config[2622]: No configuration found. Sep 12 17:24:10.414027 systemd[1]: Reloading finished in 317 ms. Sep 12 17:24:10.444853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:24:10.458787 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:24:10.459144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:24:10.459266 systemd[1]: kubelet.service: Consumed 1.747s CPU time, 128.7M memory peak. Sep 12 17:24:10.461535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:24:10.629810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:24:10.644953 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:24:10.686975 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:24:10.686975 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:24:10.686975 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:24:10.686975 kubelet[2664]: I0912 17:24:10.686942 2664 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:24:10.694038 kubelet[2664]: I0912 17:24:10.694001 2664 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:24:10.694038 kubelet[2664]: I0912 17:24:10.694031 2664 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:24:10.695029 kubelet[2664]: I0912 17:24:10.694442 2664 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:24:10.696613 kubelet[2664]: I0912 17:24:10.696579 2664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:24:10.698732 kubelet[2664]: I0912 17:24:10.698693 2664 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:24:10.704433 kubelet[2664]: I0912 17:24:10.703098 2664 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:24:10.705625 kubelet[2664]: I0912 17:24:10.705605 2664 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:24:10.705811 kubelet[2664]: I0912 17:24:10.705797 2664 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:24:10.705990 kubelet[2664]: I0912 17:24:10.705968 2664 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:24:10.706233 kubelet[2664]: I0912 17:24:10.706039 2664 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:24:10.706360 kubelet[2664]: I0912 17:24:10.706347 2664 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:24:10.706427 kubelet[2664]: I0912 17:24:10.706406 2664 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:24:10.706518 kubelet[2664]: I0912 17:24:10.706507 2664 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:24:10.706742 kubelet[2664]: I0912 17:24:10.706726 2664 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:24:10.706818 kubelet[2664]: I0912 17:24:10.706808 2664 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:24:10.706880 kubelet[2664]: I0912 17:24:10.706870 2664 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:24:10.706934 kubelet[2664]: I0912 17:24:10.706926 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:24:10.709438 kubelet[2664]: I0912 17:24:10.708121 2664 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:24:10.709438 kubelet[2664]: I0912 17:24:10.708663 2664 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:24:10.709438 kubelet[2664]: I0912 17:24:10.709031 2664 server.go:1274] "Started kubelet" Sep 12 17:24:10.710984 kubelet[2664]: I0912 17:24:10.710955 2664 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:24:10.711898 kubelet[2664]: I0912 
17:24:10.711851 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:24:10.712189 kubelet[2664]: I0912 17:24:10.712159 2664 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:24:10.712724 kubelet[2664]: I0912 17:24:10.712698 2664 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:24:10.717204 kubelet[2664]: I0912 17:24:10.717175 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:24:10.718690 kubelet[2664]: I0912 17:24:10.718670 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:24:10.719286 kubelet[2664]: I0912 17:24:10.719262 2664 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:24:10.719366 kubelet[2664]: E0912 17:24:10.719343 2664 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:24:10.719459 kubelet[2664]: I0912 17:24:10.719446 2664 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:24:10.720776 kubelet[2664]: I0912 17:24:10.720757 2664 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:24:10.725426 kubelet[2664]: I0912 17:24:10.723138 2664 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:24:10.725426 kubelet[2664]: I0912 17:24:10.723244 2664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:24:10.725426 kubelet[2664]: E0912 17:24:10.723895 2664 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:24:10.735166 kubelet[2664]: I0912 17:24:10.734277 2664 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:24:10.735377 kubelet[2664]: I0912 17:24:10.735348 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:24:10.736179 kubelet[2664]: I0912 17:24:10.736149 2664 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:24:10.736179 kubelet[2664]: I0912 17:24:10.736172 2664 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:24:10.736263 kubelet[2664]: I0912 17:24:10.736190 2664 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:24:10.736263 kubelet[2664]: E0912 17:24:10.736224 2664 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:24:10.772710 kubelet[2664]: I0912 17:24:10.772668 2664 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:24:10.772710 kubelet[2664]: I0912 17:24:10.772704 2664 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:24:10.772841 kubelet[2664]: I0912 17:24:10.772726 2664 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:24:10.772872 kubelet[2664]: I0912 17:24:10.772863 2664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:24:10.772891 kubelet[2664]: I0912 17:24:10.772873 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:24:10.772912 kubelet[2664]: I0912 17:24:10.772890 2664 policy_none.go:49] "None policy: Start" Sep 12 17:24:10.773772 kubelet[2664]: I0912 17:24:10.773758 2664 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:24:10.774471 kubelet[2664]: I0912 17:24:10.773852 2664 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:24:10.774471 kubelet[2664]: I0912 17:24:10.774017 2664 state_mem.go:75] "Updated machine memory state" Sep 12 17:24:10.778024 kubelet[2664]: I0912 17:24:10.778005 2664 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:24:10.778437 kubelet[2664]: I0912 17:24:10.778398 2664 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:24:10.778612 kubelet[2664]: I0912 17:24:10.778559 2664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:24:10.778791 kubelet[2664]: I0912 17:24:10.778774 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:24:10.848958 kubelet[2664]: E0912 17:24:10.848906 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:10.881559 kubelet[2664]: I0912 17:24:10.881531 2664 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:24:10.890368 kubelet[2664]: I0912 17:24:10.890330 2664 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:24:10.890476 kubelet[2664]: I0912 17:24:10.890405 2664 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:24:10.922780 kubelet[2664]: I0912 17:24:10.922677 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:24:10.922780 kubelet[2664]: I0912 17:24:10.922714 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/702e8ecab7779b1d563e8c2797dae2dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"702e8ecab7779b1d563e8c2797dae2dc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:24:10.922780 kubelet[2664]: I0912 17:24:10.922735 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/702e8ecab7779b1d563e8c2797dae2dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"702e8ecab7779b1d563e8c2797dae2dc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:24:10.922780 kubelet[2664]: I0912 17:24:10.922752 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:10.922953 kubelet[2664]: I0912 17:24:10.922804 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:10.922953 kubelet[2664]: I0912 17:24:10.922835 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:10.922953 kubelet[2664]: I0912 17:24:10.922854 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:10.922953 kubelet[2664]: I0912 17:24:10.922907 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:24:10.922953 kubelet[2664]: I0912 17:24:10.922923 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/702e8ecab7779b1d563e8c2797dae2dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"702e8ecab7779b1d563e8c2797dae2dc\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:24:11.095215 sudo[2698]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:24:11.095960 sudo[2698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:24:11.428790 sudo[2698]: pam_unix(sudo:session): session closed for user root Sep 12 17:24:11.708452 kubelet[2664]: I0912 17:24:11.708287 2664 apiserver.go:52] "Watching apiserver" Sep 12 17:24:11.720510 kubelet[2664]: I0912 17:24:11.720472 2664 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:24:11.776435 kubelet[2664]: I0912 
17:24:11.776284 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7762679289999999 podStartE2EDuration="1.776267929s" podCreationTimestamp="2025-09-12 17:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:11.775992951 +0000 UTC m=+1.127985929" watchObservedRunningTime="2025-09-12 17:24:11.776267929 +0000 UTC m=+1.128260907" Sep 12 17:24:11.784402 kubelet[2664]: I0912 17:24:11.784310 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7842919940000002 podStartE2EDuration="1.784291994s" podCreationTimestamp="2025-09-12 17:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:11.783713456 +0000 UTC m=+1.135706394" watchObservedRunningTime="2025-09-12 17:24:11.784291994 +0000 UTC m=+1.136284972" Sep 12 17:24:11.811214 kubelet[2664]: I0912 17:24:11.811123 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.811107938 podStartE2EDuration="2.811107938s" podCreationTimestamp="2025-09-12 17:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:11.800274207 +0000 UTC m=+1.152267185" watchObservedRunningTime="2025-09-12 17:24:11.811107938 +0000 UTC m=+1.163100916" Sep 12 17:24:12.935737 sudo[1744]: pam_unix(sudo:session): session closed for user root Sep 12 17:24:12.937016 sshd[1743]: Connection closed by 10.0.0.1 port 34386 Sep 12 17:24:12.940114 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:12.944173 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:34386.service: Deactivated successfully. Sep 12 17:24:12.946753 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:24:12.946988 systemd[1]: session-7.scope: Consumed 7.017s CPU time, 257.2M memory peak. Sep 12 17:24:12.947968 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:24:12.949066 systemd-logind[1507]: Removed session 7. Sep 12 17:24:16.947020 kubelet[2664]: I0912 17:24:16.946988 2664 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:24:16.947589 kubelet[2664]: I0912 17:24:16.947469 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:24:16.947627 containerd[1528]: time="2025-09-12T17:24:16.947283314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:24:17.783906 systemd[1]: Created slice kubepods-besteffort-podedaf4e4b_38b0_4592_a986_c0b8afb54382.slice - libcontainer container kubepods-besteffort-podedaf4e4b_38b0_4592_a986_c0b8afb54382.slice. Sep 12 17:24:17.799041 systemd[1]: Created slice kubepods-burstable-pod6ee086ec_2815_40b9_afcf_94289483ccc9.slice - libcontainer container kubepods-burstable-pod6ee086ec_2815_40b9_afcf_94289483ccc9.slice. 
Sep 12 17:24:17.862890 kubelet[2664]: I0912 17:24:17.862814 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhws\" (UniqueName: \"kubernetes.io/projected/edaf4e4b-38b0-4592-a986-c0b8afb54382-kube-api-access-jrhws\") pod \"kube-proxy-szh67\" (UID: \"edaf4e4b-38b0-4592-a986-c0b8afb54382\") " pod="kube-system/kube-proxy-szh67" Sep 12 17:24:17.863126 kubelet[2664]: I0912 17:24:17.863021 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-xtables-lock\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863126 kubelet[2664]: I0912 17:24:17.863048 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-config-path\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863126 kubelet[2664]: I0912 17:24:17.863065 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edaf4e4b-38b0-4592-a986-c0b8afb54382-lib-modules\") pod \"kube-proxy-szh67\" (UID: \"edaf4e4b-38b0-4592-a986-c0b8afb54382\") " pod="kube-system/kube-proxy-szh67" Sep 12 17:24:17.863126 kubelet[2664]: I0912 17:24:17.863079 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfgq\" (UniqueName: \"kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-kube-api-access-qzfgq\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863126 kubelet[2664]: I0912 17:24:17.863095 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-hostproc\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863126 kubelet[2664]: I0912 17:24:17.863109 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cni-path\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863590 kubelet[2664]: I0912 17:24:17.863317 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-run\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863590 kubelet[2664]: I0912 17:24:17.863341 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/edaf4e4b-38b0-4592-a986-c0b8afb54382-kube-proxy\") pod \"kube-proxy-szh67\" (UID: \"edaf4e4b-38b0-4592-a986-c0b8afb54382\") " pod="kube-system/kube-proxy-szh67" Sep 12 17:24:17.863590 kubelet[2664]: I0912 17:24:17.863359 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edaf4e4b-38b0-4592-a986-c0b8afb54382-xtables-lock\") pod \"kube-proxy-szh67\" (UID: \"edaf4e4b-38b0-4592-a986-c0b8afb54382\") " pod="kube-system/kube-proxy-szh67" Sep 12 17:24:17.863590 kubelet[2664]: I0912 17:24:17.863374 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-lib-modules\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863590 kubelet[2664]: I0912 17:24:17.863388 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-bpf-maps\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863590 kubelet[2664]: I0912 17:24:17.863401 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-cgroup\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863743 kubelet[2664]: I0912 17:24:17.863441 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee086ec-2815-40b9-afcf-94289483ccc9-clustermesh-secrets\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863743 kubelet[2664]: I0912 17:24:17.863460 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-kernel\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863743 kubelet[2664]: I0912 17:24:17.863505 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-etc-cni-netd\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863743 kubelet[2664]: I0912 17:24:17.863533 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-net\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:17.863743 kubelet[2664]: I0912 17:24:17.863551 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-hubble-tls\") pod \"cilium-rjk6k\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " pod="kube-system/cilium-rjk6k" Sep 12 17:24:18.061622 systemd[1]: Created slice kubepods-besteffort-pode40acf47_c47c_4492_b130_d5de5b007667.slice - libcontainer container kubepods-besteffort-pode40acf47_c47c_4492_b130_d5de5b007667.slice. 
Sep 12 17:24:18.067737 kubelet[2664]: I0912 17:24:18.067710 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89kxr\" (UniqueName: \"kubernetes.io/projected/e40acf47-c47c-4492-b130-d5de5b007667-kube-api-access-89kxr\") pod \"cilium-operator-5d85765b45-7hrbk\" (UID: \"e40acf47-c47c-4492-b130-d5de5b007667\") " pod="kube-system/cilium-operator-5d85765b45-7hrbk" Sep 12 17:24:18.068149 kubelet[2664]: I0912 17:24:18.067928 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e40acf47-c47c-4492-b130-d5de5b007667-cilium-config-path\") pod \"cilium-operator-5d85765b45-7hrbk\" (UID: \"e40acf47-c47c-4492-b130-d5de5b007667\") " pod="kube-system/cilium-operator-5d85765b45-7hrbk" Sep 12 17:24:18.098712 containerd[1528]: time="2025-09-12T17:24:18.098673999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-szh67,Uid:edaf4e4b-38b0-4592-a986-c0b8afb54382,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:18.103540 containerd[1528]: time="2025-09-12T17:24:18.103396173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rjk6k,Uid:6ee086ec-2815-40b9-afcf-94289483ccc9,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:18.118181 containerd[1528]: time="2025-09-12T17:24:18.118141128Z" level=info msg="connecting to shim aa24b6a3caed91895aed2484afa7da4fa3acf125a6e9dd78a9001a94ca526535" address="unix:///run/containerd/s/90731f4590b4190c1d8494428b4e18240e1d374da1fbc5e32ee16131348f4752" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:18.131622 containerd[1528]: time="2025-09-12T17:24:18.131576483Z" level=info msg="connecting to shim ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245" address="unix:///run/containerd/s/4f012937d4655757b83ae9c3947b5ba28bbcafd46a12e4e593b23585bcae5557" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:18.148597 systemd[1]: Started cri-containerd-aa24b6a3caed91895aed2484afa7da4fa3acf125a6e9dd78a9001a94ca526535.scope - libcontainer container aa24b6a3caed91895aed2484afa7da4fa3acf125a6e9dd78a9001a94ca526535. Sep 12 17:24:18.151599 systemd[1]: Started cri-containerd-ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245.scope - libcontainer container ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245. 
Sep 12 17:24:18.187137 containerd[1528]: time="2025-09-12T17:24:18.187061116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-szh67,Uid:edaf4e4b-38b0-4592-a986-c0b8afb54382,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa24b6a3caed91895aed2484afa7da4fa3acf125a6e9dd78a9001a94ca526535\"" Sep 12 17:24:18.189601 containerd[1528]: time="2025-09-12T17:24:18.189560580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rjk6k,Uid:6ee086ec-2815-40b9-afcf-94289483ccc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\"" Sep 12 17:24:18.190897 containerd[1528]: time="2025-09-12T17:24:18.190739403Z" level=info msg="CreateContainer within sandbox \"aa24b6a3caed91895aed2484afa7da4fa3acf125a6e9dd78a9001a94ca526535\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:24:18.191961 containerd[1528]: time="2025-09-12T17:24:18.191914746Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:24:18.201935 containerd[1528]: time="2025-09-12T17:24:18.201881279Z" level=info msg="Container 01909187116e717701edce1f34a3e5d93a381a2d9da6811cbbb0f2e9c4e0d824: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:18.216253 containerd[1528]: time="2025-09-12T17:24:18.216184060Z" level=info msg="CreateContainer within sandbox \"aa24b6a3caed91895aed2484afa7da4fa3acf125a6e9dd78a9001a94ca526535\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01909187116e717701edce1f34a3e5d93a381a2d9da6811cbbb0f2e9c4e0d824\"" Sep 12 17:24:18.216875 containerd[1528]: time="2025-09-12T17:24:18.216824698Z" level=info msg="StartContainer for \"01909187116e717701edce1f34a3e5d93a381a2d9da6811cbbb0f2e9c4e0d824\"" Sep 12 17:24:18.218278 containerd[1528]: time="2025-09-12T17:24:18.218250952Z" level=info msg="connecting to shim 01909187116e717701edce1f34a3e5d93a381a2d9da6811cbbb0f2e9c4e0d824" address="unix:///run/containerd/s/90731f4590b4190c1d8494428b4e18240e1d374da1fbc5e32ee16131348f4752" protocol=ttrpc version=3 Sep 12 17:24:18.240678 systemd[1]: Started cri-containerd-01909187116e717701edce1f34a3e5d93a381a2d9da6811cbbb0f2e9c4e0d824.scope - libcontainer container 01909187116e717701edce1f34a3e5d93a381a2d9da6811cbbb0f2e9c4e0d824. Sep 12 17:24:18.275620 containerd[1528]: time="2025-09-12T17:24:18.275551285Z" level=info msg="StartContainer for \"01909187116e717701edce1f34a3e5d93a381a2d9da6811cbbb0f2e9c4e0d824\" returns successfully" Sep 12 17:24:18.367888 containerd[1528]: time="2025-09-12T17:24:18.367628371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7hrbk,Uid:e40acf47-c47c-4492-b130-d5de5b007667,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:18.392983 containerd[1528]: time="2025-09-12T17:24:18.392918329Z" level=info msg="connecting to shim 9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638" address="unix:///run/containerd/s/30f611b5f53bbce395f537ddda57414fa5a37b273c45883c11b4570ecd060534" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:18.415704 systemd[1]: Started cri-containerd-9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638.scope - libcontainer container 9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638. 
Sep 12 17:24:18.451570 containerd[1528]: time="2025-09-12T17:24:18.451475816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7hrbk,Uid:e40acf47-c47c-4492-b130-d5de5b007667,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\"" Sep 12 17:24:18.787500 kubelet[2664]: I0912 17:24:18.787441 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-szh67" podStartSLOduration=1.78740634 podStartE2EDuration="1.78740634s" podCreationTimestamp="2025-09-12 17:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:18.787345653 +0000 UTC m=+8.139338671" watchObservedRunningTime="2025-09-12 17:24:18.78740634 +0000 UTC m=+8.139399318" Sep 12 17:24:25.528351 update_engine[1512]: I20250912 17:24:25.528292 1512 update_attempter.cc:509] Updating boot flags... Sep 12 17:24:26.177719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290149332.mount: Deactivated successfully. Sep 12 17:24:30.250560 containerd[1528]: time="2025-09-12T17:24:30.250446171Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:30.251072 containerd[1528]: time="2025-09-12T17:24:30.251038450Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:24:30.251979 containerd[1528]: time="2025-09-12T17:24:30.251931988Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:30.253846 containerd[1528]: time="2025-09-12T17:24:30.253802311Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.061704383s" Sep 12 17:24:30.253846 containerd[1528]: time="2025-09-12T17:24:30.253840114Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:24:30.260225 containerd[1528]: time="2025-09-12T17:24:30.260164890Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:24:30.271785 containerd[1528]: time="2025-09-12T17:24:30.271746371Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:24:30.283573 containerd[1528]: time="2025-09-12T17:24:30.283441661Z" level=info msg="Container 5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:30.290305 containerd[1528]: time="2025-09-12T17:24:30.290244548Z" level=info msg="CreateContainer within sandbox 
\"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\"" Sep 12 17:24:30.291997 containerd[1528]: time="2025-09-12T17:24:30.291966181Z" level=info msg="StartContainer for \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\"" Sep 12 17:24:30.293238 containerd[1528]: time="2025-09-12T17:24:30.293183981Z" level=info msg="connecting to shim 5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150" address="unix:///run/containerd/s/4f012937d4655757b83ae9c3947b5ba28bbcafd46a12e4e593b23585bcae5557" protocol=ttrpc version=3 Sep 12 17:24:30.336605 systemd[1]: Started cri-containerd-5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150.scope - libcontainer container 5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150. Sep 12 17:24:30.366154 containerd[1528]: time="2025-09-12T17:24:30.366108097Z" level=info msg="StartContainer for \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" returns successfully" Sep 12 17:24:30.377702 systemd[1]: cri-containerd-5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150.scope: Deactivated successfully. Sep 12 17:24:30.414025 containerd[1528]: time="2025-09-12T17:24:30.413978005Z" level=info msg="received exit event container_id:\"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" id:\"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" pid:3097 exited_at:{seconds:1757697870 nanos:407366570}" Sep 12 17:24:30.414139 containerd[1528]: time="2025-09-12T17:24:30.414125534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" id:\"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" pid:3097 exited_at:{seconds:1757697870 nanos:407366570}" Sep 12 17:24:30.810998 containerd[1528]: time="2025-09-12T17:24:30.810560924Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:24:30.817517 containerd[1528]: time="2025-09-12T17:24:30.817485819Z" level=info msg="Container 45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:30.824655 containerd[1528]: time="2025-09-12T17:24:30.824603247Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\"" Sep 12 17:24:30.825055 containerd[1528]: time="2025-09-12T17:24:30.825035476Z" level=info msg="StartContainer for \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\"" Sep 12 17:24:30.831006 containerd[1528]: time="2025-09-12T17:24:30.830957465Z" level=info msg="connecting to shim 45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056" address="unix:///run/containerd/s/4f012937d4655757b83ae9c3947b5ba28bbcafd46a12e4e593b23585bcae5557" protocol=ttrpc version=3 Sep 12 17:24:30.864591 systemd[1]: Started cri-containerd-45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056.scope - libcontainer container 45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056. 
Sep 12 17:24:30.890722 containerd[1528]: time="2025-09-12T17:24:30.890675472Z" level=info msg="StartContainer for \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" returns successfully" Sep 12 17:24:30.904439 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:24:30.904679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:24:30.905220 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:24:30.907813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:24:30.908017 systemd[1]: cri-containerd-45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056.scope: Deactivated successfully. Sep 12 17:24:30.910010 containerd[1528]: time="2025-09-12T17:24:30.909973621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" id:\"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" pid:3143 exited_at:{seconds:1757697870 nanos:908955874}" Sep 12 17:24:30.910161 containerd[1528]: time="2025-09-12T17:24:30.910147193Z" level=info msg="received exit event container_id:\"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" id:\"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" pid:3143 exited_at:{seconds:1757697870 nanos:908955874}" Sep 12 17:24:30.932098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:24:31.282875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150-rootfs.mount: Deactivated successfully. Sep 12 17:24:31.478843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99460742.mount: Deactivated successfully. Sep 12 17:24:31.812932 containerd[1528]: time="2025-09-12T17:24:31.812705519Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:24:31.831138 containerd[1528]: time="2025-09-12T17:24:31.830693849Z" level=info msg="Container 91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:31.831243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268291832.mount: Deactivated successfully. Sep 12 17:24:31.839710 containerd[1528]: time="2025-09-12T17:24:31.839660931Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\"" Sep 12 17:24:31.840770 containerd[1528]: time="2025-09-12T17:24:31.840743439Z" level=info msg="StartContainer for \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\"" Sep 12 17:24:31.842375 containerd[1528]: time="2025-09-12T17:24:31.842343340Z" level=info msg="connecting to shim 91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484" address="unix:///run/containerd/s/4f012937d4655757b83ae9c3947b5ba28bbcafd46a12e4e593b23585bcae5557" protocol=ttrpc version=3 Sep 12 17:24:31.881632 systemd[1]: Started cri-containerd-91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484.scope - libcontainer container 91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484. 
Sep 12 17:24:31.930579 systemd[1]: cri-containerd-91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484.scope: Deactivated successfully. Sep 12 17:24:31.936774 containerd[1528]: time="2025-09-12T17:24:31.936731345Z" level=info msg="StartContainer for \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" returns successfully" Sep 12 17:24:31.946581 containerd[1528]: time="2025-09-12T17:24:31.946536200Z" level=info msg="received exit event container_id:\"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" id:\"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" pid:3199 exited_at:{seconds:1757697871 nanos:946177177}" Sep 12 17:24:31.946792 containerd[1528]: time="2025-09-12T17:24:31.946729412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" id:\"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" pid:3199 exited_at:{seconds:1757697871 nanos:946177177}" Sep 12 17:24:32.235505 containerd[1528]: time="2025-09-12T17:24:32.235456239Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:32.236032 containerd[1528]: time="2025-09-12T17:24:32.235800380Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:24:32.236892 containerd[1528]: time="2025-09-12T17:24:32.236851963Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:24:32.238808 containerd[1528]: time="2025-09-12T17:24:32.238764478Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.978551505s" Sep 12 17:24:32.238808 containerd[1528]: time="2025-09-12T17:24:32.238804960Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:24:32.241599 containerd[1528]: time="2025-09-12T17:24:32.241563326Z" level=info msg="CreateContainer within sandbox \"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:24:32.260754 containerd[1528]: time="2025-09-12T17:24:32.260692593Z" level=info msg="Container bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:32.267642 containerd[1528]: time="2025-09-12T17:24:32.267598927Z" level=info msg="CreateContainer within sandbox \"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\"" Sep 12 17:24:32.268056 containerd[1528]: time="2025-09-12T17:24:32.268021872Z" level=info 
msg="StartContainer for \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\"" Sep 12 17:24:32.269195 containerd[1528]: time="2025-09-12T17:24:32.269146940Z" level=info msg="connecting to shim bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3" address="unix:///run/containerd/s/30f611b5f53bbce395f537ddda57414fa5a37b273c45883c11b4570ecd060534" protocol=ttrpc version=3 Sep 12 17:24:32.303660 systemd[1]: Started cri-containerd-bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3.scope - libcontainer container bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3. Sep 12 17:24:32.330189 containerd[1528]: time="2025-09-12T17:24:32.330152718Z" level=info msg="StartContainer for \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" returns successfully" Sep 12 17:24:32.822785 containerd[1528]: time="2025-09-12T17:24:32.822690134Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:24:32.835311 kubelet[2664]: I0912 17:24:32.835010 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7hrbk" podStartSLOduration=1.048852308 podStartE2EDuration="14.834993592s" podCreationTimestamp="2025-09-12 17:24:18 +0000 UTC" firstStartedPulling="2025-09-12 17:24:18.45323319 +0000 UTC m=+7.805226168" lastFinishedPulling="2025-09-12 17:24:32.239374474 +0000 UTC m=+21.591367452" observedRunningTime="2025-09-12 17:24:32.831249607 +0000 UTC m=+22.183242585" watchObservedRunningTime="2025-09-12 17:24:32.834993592 +0000 UTC m=+22.186986570" Sep 12 17:24:32.837588 containerd[1528]: time="2025-09-12T17:24:32.837517343Z" level=info msg="Container 0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:32.858194 containerd[1528]: time="2025-09-12T17:24:32.858141180Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\"" Sep 12 17:24:32.859331 containerd[1528]: time="2025-09-12T17:24:32.859296449Z" level=info msg="StartContainer for \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\"" Sep 12 17:24:32.860334 containerd[1528]: time="2025-09-12T17:24:32.860260547Z" level=info msg="connecting to shim 0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b" address="unix:///run/containerd/s/4f012937d4655757b83ae9c3947b5ba28bbcafd46a12e4e593b23585bcae5557" protocol=ttrpc version=3 Sep 12 17:24:32.881621 systemd[1]: Started cri-containerd-0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b.scope - libcontainer container 0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b. Sep 12 17:24:32.933093 systemd[1]: cri-containerd-0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b.scope: Deactivated successfully. 
Sep 12 17:24:32.936010 containerd[1528]: time="2025-09-12T17:24:32.935955246Z" level=info msg="received exit event container_id:\"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" id:\"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" pid:3280 exited_at:{seconds:1757697872 nanos:933725433}" Sep 12 17:24:32.936010 containerd[1528]: time="2025-09-12T17:24:32.936004809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" id:\"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" pid:3280 exited_at:{seconds:1757697872 nanos:933725433}" Sep 12 17:24:32.938024 containerd[1528]: time="2025-09-12T17:24:32.937994088Z" level=info msg="StartContainer for \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" returns successfully" Sep 12 17:24:32.984541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b-rootfs.mount: Deactivated successfully. Sep 12 17:24:33.834085 containerd[1528]: time="2025-09-12T17:24:33.833943388Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:24:33.855732 containerd[1528]: time="2025-09-12T17:24:33.854661536Z" level=info msg="Container 0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:33.868278 containerd[1528]: time="2025-09-12T17:24:33.868173591Z" level=info msg="CreateContainer within sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\"" Sep 12 17:24:33.869701 containerd[1528]: time="2025-09-12T17:24:33.869678997Z" level=info msg="StartContainer for \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\"" Sep 12 17:24:33.871973 containerd[1528]: time="2025-09-12T17:24:33.871851561Z" level=info msg="connecting to shim 0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb" address="unix:///run/containerd/s/4f012937d4655757b83ae9c3947b5ba28bbcafd46a12e4e593b23585bcae5557" protocol=ttrpc version=3 Sep 12 17:24:33.896590 systemd[1]: Started cri-containerd-0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb.scope - libcontainer container 0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb. Sep 12 17:24:33.931069 containerd[1528]: time="2025-09-12T17:24:33.931023594Z" level=info msg="StartContainer for \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" returns successfully" Sep 12 17:24:34.061054 containerd[1528]: time="2025-09-12T17:24:34.061009261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" id:\"90ec04a698ad1829e10252c69f8266d00ba04d4ee9713d93b88ca34d53cba88c\" pid:3347 exited_at:{seconds:1757697874 nanos:60655481}" Sep 12 17:24:34.113249 kubelet[2664]: I0912 17:24:34.112820 2664 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:24:34.153713 systemd[1]: Created slice kubepods-burstable-podb456587c_2ebc_4095_8621_852dc5fb8c6e.slice - libcontainer container kubepods-burstable-podb456587c_2ebc_4095_8621_852dc5fb8c6e.slice. 
Sep 12 17:24:34.162919 systemd[1]: Created slice kubepods-burstable-pod61884cff_ad7f_4172_92ca_5c805a39651e.slice - libcontainer container kubepods-burstable-pod61884cff_ad7f_4172_92ca_5c805a39651e.slice. Sep 12 17:24:34.279384 kubelet[2664]: I0912 17:24:34.279303 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5qp\" (UniqueName: \"kubernetes.io/projected/61884cff-ad7f-4172-92ca-5c805a39651e-kube-api-access-qm5qp\") pod \"coredns-7c65d6cfc9-fpv9n\" (UID: \"61884cff-ad7f-4172-92ca-5c805a39651e\") " pod="kube-system/coredns-7c65d6cfc9-fpv9n" Sep 12 17:24:34.279384 kubelet[2664]: I0912 17:24:34.279359 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b456587c-2ebc-4095-8621-852dc5fb8c6e-config-volume\") pod \"coredns-7c65d6cfc9-kgksp\" (UID: \"b456587c-2ebc-4095-8621-852dc5fb8c6e\") " pod="kube-system/coredns-7c65d6cfc9-kgksp" Sep 12 17:24:34.279384 kubelet[2664]: I0912 17:24:34.279388 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61884cff-ad7f-4172-92ca-5c805a39651e-config-volume\") pod \"coredns-7c65d6cfc9-fpv9n\" (UID: \"61884cff-ad7f-4172-92ca-5c805a39651e\") " pod="kube-system/coredns-7c65d6cfc9-fpv9n" Sep 12 17:24:34.279577 kubelet[2664]: I0912 17:24:34.279404 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwss8\" (UniqueName: \"kubernetes.io/projected/b456587c-2ebc-4095-8621-852dc5fb8c6e-kube-api-access-pwss8\") pod \"coredns-7c65d6cfc9-kgksp\" (UID: \"b456587c-2ebc-4095-8621-852dc5fb8c6e\") " pod="kube-system/coredns-7c65d6cfc9-kgksp" Sep 12 17:24:34.460821 containerd[1528]: time="2025-09-12T17:24:34.460667472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kgksp,Uid:b456587c-2ebc-4095-8621-852dc5fb8c6e,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:34.466170 containerd[1528]: time="2025-09-12T17:24:34.466130692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fpv9n,Uid:61884cff-ad7f-4172-92ca-5c805a39651e,Namespace:kube-system,Attempt:0,}" Sep 12 17:24:36.068528 systemd-networkd[1437]: cilium_host: Link UP Sep 12 17:24:36.068668 systemd-networkd[1437]: cilium_net: Link UP Sep 12 17:24:36.068800 systemd-networkd[1437]: cilium_net: Gained carrier Sep 12 17:24:36.069043 systemd-networkd[1437]: cilium_host: Gained carrier Sep 12 17:24:36.170964 systemd-networkd[1437]: cilium_vxlan: Link UP Sep 12 17:24:36.170976 systemd-networkd[1437]: cilium_vxlan: Gained carrier Sep 12 17:24:36.462750 kernel: NET: Registered PF_ALG protocol family Sep 12 17:24:36.646525 systemd-networkd[1437]: cilium_net: Gained IPv6LL Sep 12 17:24:36.902590 systemd-networkd[1437]: cilium_host: Gained IPv6LL Sep 12 17:24:37.075527 systemd-networkd[1437]: lxc_health: Link UP Sep 12 17:24:37.085639 systemd-networkd[1437]: lxc_health: Gained carrier Sep 12 17:24:37.222640 systemd-networkd[1437]: cilium_vxlan: Gained IPv6LL Sep 12 17:24:37.526455 kernel: eth0: renamed from tmpcc4e7 Sep 12 17:24:37.529730 systemd-networkd[1437]: lxc1d58a21b8cf6: Link UP Sep 12 17:24:37.529940 systemd-networkd[1437]: lxc1d58a21b8cf6: Gained carrier Sep 12 17:24:37.530615 systemd-networkd[1437]: lxc14750a1d2923: Link UP Sep 12 17:24:37.548118 kernel: eth0: renamed from tmpfde72 Sep 12 17:24:37.548641 systemd-networkd[1437]: 
lxc14750a1d2923: Gained carrier Sep 12 17:24:38.151236 kubelet[2664]: I0912 17:24:38.150650 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rjk6k" podStartSLOduration=9.082235406 podStartE2EDuration="21.150632822s" podCreationTimestamp="2025-09-12 17:24:17 +0000 UTC" firstStartedPulling="2025-09-12 17:24:18.191520858 +0000 UTC m=+7.543513836" lastFinishedPulling="2025-09-12 17:24:30.259918274 +0000 UTC m=+19.611911252" observedRunningTime="2025-09-12 17:24:34.860761307 +0000 UTC m=+24.212754285" watchObservedRunningTime="2025-09-12 17:24:38.150632822 +0000 UTC m=+27.502625760" Sep 12 17:24:38.310616 systemd-networkd[1437]: lxc_health: Gained IPv6LL Sep 12 17:24:38.950630 systemd-networkd[1437]: lxc1d58a21b8cf6: Gained IPv6LL Sep 12 17:24:39.014570 systemd-networkd[1437]: lxc14750a1d2923: Gained IPv6LL Sep 12 17:24:41.146722 systemd[1]: Started sshd@7-10.0.0.110:22-10.0.0.1:48612.service - OpenSSH per-connection server daemon (10.0.0.1:48612). Sep 12 17:24:41.210353 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 48612 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:41.212905 sshd-session[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:41.213554 containerd[1528]: time="2025-09-12T17:24:41.213109093Z" level=info msg="connecting to shim cc4e7581c96297839c06dea2f4c4246eac09c6ef0d3d86f28728dfe075ffb1aa" address="unix:///run/containerd/s/b75d1e590b8f74315200fd9c760c3f961206fe8cc56f8a74876636a3f607d26b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:41.218457 systemd-logind[1507]: New session 8 of user core. Sep 12 17:24:41.220061 containerd[1528]: time="2025-09-12T17:24:41.219562961Z" level=info msg="connecting to shim fde722d1660c434ba79a5513b698b83242c9d7f818eacc8aaf7a9b9b30e73e31" address="unix:///run/containerd/s/e18b4201ece45653c989ddbb158b8f116a1b57f6b9a73629a086794d51137c98" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:41.223610 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:24:41.244715 systemd[1]: Started cri-containerd-fde722d1660c434ba79a5513b698b83242c9d7f818eacc8aaf7a9b9b30e73e31.scope - libcontainer container fde722d1660c434ba79a5513b698b83242c9d7f818eacc8aaf7a9b9b30e73e31. Sep 12 17:24:41.247630 systemd[1]: Started cri-containerd-cc4e7581c96297839c06dea2f4c4246eac09c6ef0d3d86f28728dfe075ffb1aa.scope - libcontainer container cc4e7581c96297839c06dea2f4c4246eac09c6ef0d3d86f28728dfe075ffb1aa. 
Sep 12 17:24:41.259167 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:24:41.260384 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:24:41.286426 containerd[1528]: time="2025-09-12T17:24:41.286307167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kgksp,Uid:b456587c-2ebc-4095-8621-852dc5fb8c6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc4e7581c96297839c06dea2f4c4246eac09c6ef0d3d86f28728dfe075ffb1aa\"" Sep 12 17:24:41.289346 containerd[1528]: time="2025-09-12T17:24:41.289244328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fpv9n,Uid:61884cff-ad7f-4172-92ca-5c805a39651e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde722d1660c434ba79a5513b698b83242c9d7f818eacc8aaf7a9b9b30e73e31\"" Sep 12 17:24:41.292849 containerd[1528]: time="2025-09-12T17:24:41.292804356Z" level=info msg="CreateContainer within sandbox \"cc4e7581c96297839c06dea2f4c4246eac09c6ef0d3d86f28728dfe075ffb1aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:24:41.296249 containerd[1528]: time="2025-09-12T17:24:41.296206297Z" level=info msg="CreateContainer within sandbox \"fde722d1660c434ba79a5513b698b83242c9d7f818eacc8aaf7a9b9b30e73e31\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:24:41.300802 containerd[1528]: time="2025-09-12T17:24:41.300753125Z" level=info msg="Container 8eb2c5e46d8ca32e4433821dae5de4c589c2fc18c9e38c95471c172b23e62cf3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:41.308293 containerd[1528]: time="2025-09-12T17:24:41.308220155Z" level=info msg="Container ae7259dee216afa599585ab88d37c2f0bbccf2ba3f7e8b908473b08d4c874ac6: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:41.309351 containerd[1528]: time="2025-09-12T17:24:41.309236237Z" level=info msg="CreateContainer within sandbox \"cc4e7581c96297839c06dea2f4c4246eac09c6ef0d3d86f28728dfe075ffb1aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8eb2c5e46d8ca32e4433821dae5de4c589c2fc18c9e38c95471c172b23e62cf3\"" Sep 12 17:24:41.310980 containerd[1528]: time="2025-09-12T17:24:41.310946788Z" level=info msg="StartContainer for \"8eb2c5e46d8ca32e4433821dae5de4c589c2fc18c9e38c95471c172b23e62cf3\"" Sep 12 17:24:41.312017 containerd[1528]: time="2025-09-12T17:24:41.311982631Z" level=info msg="connecting to shim 8eb2c5e46d8ca32e4433821dae5de4c589c2fc18c9e38c95471c172b23e62cf3" address="unix:///run/containerd/s/b75d1e590b8f74315200fd9c760c3f961206fe8cc56f8a74876636a3f607d26b" protocol=ttrpc version=3 Sep 12 17:24:41.315993 containerd[1528]: time="2025-09-12T17:24:41.315944475Z" level=info msg="CreateContainer within sandbox \"fde722d1660c434ba79a5513b698b83242c9d7f818eacc8aaf7a9b9b30e73e31\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae7259dee216afa599585ab88d37c2f0bbccf2ba3f7e8b908473b08d4c874ac6\"" Sep 12 17:24:41.318468 containerd[1528]: time="2025-09-12T17:24:41.318251451Z" level=info msg="StartContainer for \"ae7259dee216afa599585ab88d37c2f0bbccf2ba3f7e8b908473b08d4c874ac6\"" Sep 12 17:24:41.319688 containerd[1528]: time="2025-09-12T17:24:41.319642788Z" level=info msg="connecting to shim ae7259dee216afa599585ab88d37c2f0bbccf2ba3f7e8b908473b08d4c874ac6" address="unix:///run/containerd/s/e18b4201ece45653c989ddbb158b8f116a1b57f6b9a73629a086794d51137c98" protocol=ttrpc version=3 Sep 12 17:24:41.345614 
systemd[1]: Started cri-containerd-8eb2c5e46d8ca32e4433821dae5de4c589c2fc18c9e38c95471c172b23e62cf3.scope - libcontainer container 8eb2c5e46d8ca32e4433821dae5de4c589c2fc18c9e38c95471c172b23e62cf3. Sep 12 17:24:41.350313 systemd[1]: Started cri-containerd-ae7259dee216afa599585ab88d37c2f0bbccf2ba3f7e8b908473b08d4c874ac6.scope - libcontainer container ae7259dee216afa599585ab88d37c2f0bbccf2ba3f7e8b908473b08d4c874ac6. Sep 12 17:24:41.388809 sshd[3902]: Connection closed by 10.0.0.1 port 48612 Sep 12 17:24:41.388212 sshd-session[3842]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:41.396976 containerd[1528]: time="2025-09-12T17:24:41.396794745Z" level=info msg="StartContainer for \"ae7259dee216afa599585ab88d37c2f0bbccf2ba3f7e8b908473b08d4c874ac6\" returns successfully" Sep 12 17:24:41.397743 containerd[1528]: time="2025-09-12T17:24:41.397569458Z" level=info msg="StartContainer for \"8eb2c5e46d8ca32e4433821dae5de4c589c2fc18c9e38c95471c172b23e62cf3\" returns successfully" Sep 12 17:24:41.397678 systemd[1]: sshd@7-10.0.0.110:22-10.0.0.1:48612.service: Deactivated successfully. Sep 12 17:24:41.405670 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:24:41.407976 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:24:41.409995 systemd-logind[1507]: Removed session 8. Sep 12 17:24:41.897184 kubelet[2664]: I0912 17:24:41.896858 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fpv9n" podStartSLOduration=23.896837828 podStartE2EDuration="23.896837828s" podCreationTimestamp="2025-09-12 17:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:41.895992393 +0000 UTC m=+31.247985371" watchObservedRunningTime="2025-09-12 17:24:41.896837828 +0000 UTC m=+31.248830846" Sep 12 17:24:41.912972 kubelet[2664]: I0912 17:24:41.912905 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kgksp" podStartSLOduration=23.912887613 podStartE2EDuration="23.912887613s" podCreationTimestamp="2025-09-12 17:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:41.912639403 +0000 UTC m=+31.264632381" watchObservedRunningTime="2025-09-12 17:24:41.912887613 +0000 UTC m=+31.264880631" Sep 12 17:24:46.411608 systemd[1]: Started sshd@8-10.0.0.110:22-10.0.0.1:48624.service - OpenSSH per-connection server daemon (10.0.0.1:48624). Sep 12 17:24:46.480790 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 48624 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:46.482745 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:46.489557 systemd-logind[1507]: New session 9 of user core. Sep 12 17:24:46.501618 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:24:46.634178 sshd[4030]: Connection closed by 10.0.0.1 port 48624 Sep 12 17:24:46.634731 sshd-session[4027]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:46.640808 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:24:46.641104 systemd[1]: sshd@8-10.0.0.110:22-10.0.0.1:48624.service: Deactivated successfully. Sep 12 17:24:46.647004 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 12 17:24:46.650524 systemd-logind[1507]: Removed session 9. Sep 12 17:24:51.649983 systemd[1]: Started sshd@9-10.0.0.110:22-10.0.0.1:33470.service - OpenSSH per-connection server daemon (10.0.0.1:33470). Sep 12 17:24:51.733335 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 33470 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:51.735520 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:51.740686 systemd-logind[1507]: New session 10 of user core. Sep 12 17:24:51.753009 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:24:51.890494 sshd[4052]: Connection closed by 10.0.0.1 port 33470 Sep 12 17:24:51.890863 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:51.894955 systemd[1]: sshd@9-10.0.0.110:22-10.0.0.1:33470.service: Deactivated successfully. Sep 12 17:24:51.898351 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:24:51.900075 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:24:51.901815 systemd-logind[1507]: Removed session 10. Sep 12 17:24:56.916058 systemd[1]: Started sshd@10-10.0.0.110:22-10.0.0.1:33474.service - OpenSSH per-connection server daemon (10.0.0.1:33474). Sep 12 17:24:56.977532 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 33474 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:56.978868 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:56.983745 systemd-logind[1507]: New session 11 of user core. Sep 12 17:24:56.994680 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:24:57.125216 sshd[4071]: Connection closed by 10.0.0.1 port 33474 Sep 12 17:24:57.125589 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:57.136194 systemd[1]: sshd@10-10.0.0.110:22-10.0.0.1:33474.service: Deactivated successfully. Sep 12 17:24:57.139945 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:24:57.144460 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:24:57.147028 systemd-logind[1507]: Removed session 11. Sep 12 17:24:57.149553 systemd[1]: Started sshd@11-10.0.0.110:22-10.0.0.1:33476.service - OpenSSH per-connection server daemon (10.0.0.1:33476). Sep 12 17:24:57.208759 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 33476 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:57.210030 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:57.214699 systemd-logind[1507]: New session 12 of user core. Sep 12 17:24:57.226661 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:24:57.394850 sshd[4089]: Connection closed by 10.0.0.1 port 33476 Sep 12 17:24:57.396805 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:57.406670 systemd[1]: sshd@11-10.0.0.110:22-10.0.0.1:33476.service: Deactivated successfully. Sep 12 17:24:57.411979 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:24:57.415111 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:24:57.417786 systemd-logind[1507]: Removed session 12. Sep 12 17:24:57.420698 systemd[1]: Started sshd@12-10.0.0.110:22-10.0.0.1:33480.service - OpenSSH per-connection server daemon (10.0.0.1:33480). 
Sep 12 17:24:57.480593 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 33480 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:57.482179 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:57.486526 systemd-logind[1507]: New session 13 of user core. Sep 12 17:24:57.495606 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:24:57.611035 sshd[4103]: Connection closed by 10.0.0.1 port 33480 Sep 12 17:24:57.611979 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:57.615911 systemd[1]: sshd@12-10.0.0.110:22-10.0.0.1:33480.service: Deactivated successfully. Sep 12 17:24:57.617888 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:24:57.618850 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:24:57.619902 systemd-logind[1507]: Removed session 13. Sep 12 17:25:02.626897 systemd[1]: Started sshd@13-10.0.0.110:22-10.0.0.1:55142.service - OpenSSH per-connection server daemon (10.0.0.1:55142). Sep 12 17:25:02.678314 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 55142 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:02.681781 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:02.686494 systemd-logind[1507]: New session 14 of user core. Sep 12 17:25:02.695592 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:25:02.810762 sshd[4119]: Connection closed by 10.0.0.1 port 55142 Sep 12 17:25:02.811275 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:02.814837 systemd[1]: sshd@13-10.0.0.110:22-10.0.0.1:55142.service: Deactivated successfully. Sep 12 17:25:02.817563 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:25:02.818134 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:25:02.819465 systemd-logind[1507]: Removed session 14. Sep 12 17:25:07.830080 systemd[1]: Started sshd@14-10.0.0.110:22-10.0.0.1:55152.service - OpenSSH per-connection server daemon (10.0.0.1:55152). Sep 12 17:25:07.902556 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 55152 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:07.904373 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:07.909944 systemd-logind[1507]: New session 15 of user core. Sep 12 17:25:07.920655 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:25:08.049613 sshd[4138]: Connection closed by 10.0.0.1 port 55152 Sep 12 17:25:08.050011 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:08.066617 systemd[1]: sshd@14-10.0.0.110:22-10.0.0.1:55152.service: Deactivated successfully. Sep 12 17:25:08.069282 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:25:08.071147 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:25:08.074060 systemd[1]: Started sshd@15-10.0.0.110:22-10.0.0.1:55164.service - OpenSSH per-connection server daemon (10.0.0.1:55164). Sep 12 17:25:08.077895 systemd-logind[1507]: Removed session 15. 
Sep 12 17:25:08.143057 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 55164 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:08.144700 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:08.151233 systemd-logind[1507]: New session 16 of user core. Sep 12 17:25:08.160608 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:25:08.361988 sshd[4155]: Connection closed by 10.0.0.1 port 55164 Sep 12 17:25:08.362483 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:08.379324 systemd[1]: sshd@15-10.0.0.110:22-10.0.0.1:55164.service: Deactivated successfully. Sep 12 17:25:08.381047 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:25:08.382058 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:25:08.385192 systemd[1]: Started sshd@16-10.0.0.110:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174). Sep 12 17:25:08.386071 systemd-logind[1507]: Removed session 16. Sep 12 17:25:08.441367 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:08.442714 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:08.447053 systemd-logind[1507]: New session 17 of user core. Sep 12 17:25:08.464637 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:25:09.791096 sshd[4169]: Connection closed by 10.0.0.1 port 55174 Sep 12 17:25:09.791374 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:09.803314 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:55174.service: Deactivated successfully. Sep 12 17:25:09.808126 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:25:09.811025 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:25:09.816691 systemd[1]: Started sshd@17-10.0.0.110:22-10.0.0.1:55182.service - OpenSSH per-connection server daemon (10.0.0.1:55182). Sep 12 17:25:09.818366 systemd-logind[1507]: Removed session 17. Sep 12 17:25:09.868083 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 55182 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:09.869580 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:09.874505 systemd-logind[1507]: New session 18 of user core. Sep 12 17:25:09.884595 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:25:10.116112 sshd[4197]: Connection closed by 10.0.0.1 port 55182 Sep 12 17:25:10.116963 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:10.132180 systemd[1]: sshd@17-10.0.0.110:22-10.0.0.1:55182.service: Deactivated successfully. Sep 12 17:25:10.134861 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:25:10.135752 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:25:10.139658 systemd[1]: Started sshd@18-10.0.0.110:22-10.0.0.1:42386.service - OpenSSH per-connection server daemon (10.0.0.1:42386). Sep 12 17:25:10.141063 systemd-logind[1507]: Removed session 18. 
Sep 12 17:25:10.211710 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 42386 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:10.213190 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:10.217491 systemd-logind[1507]: New session 19 of user core. Sep 12 17:25:10.227683 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:25:10.351991 sshd[4212]: Connection closed by 10.0.0.1 port 42386 Sep 12 17:25:10.352347 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:10.356895 systemd[1]: sshd@18-10.0.0.110:22-10.0.0.1:42386.service: Deactivated successfully. Sep 12 17:25:10.359004 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:25:10.359719 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:25:10.360942 systemd-logind[1507]: Removed session 19. Sep 12 17:25:15.369267 systemd[1]: Started sshd@19-10.0.0.110:22-10.0.0.1:42396.service - OpenSSH per-connection server daemon (10.0.0.1:42396). Sep 12 17:25:15.430755 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 42396 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:15.432346 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:15.438779 systemd-logind[1507]: New session 20 of user core. Sep 12 17:25:15.455645 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:25:15.589905 sshd[4233]: Connection closed by 10.0.0.1 port 42396 Sep 12 17:25:15.590648 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:15.596239 systemd[1]: sshd@19-10.0.0.110:22-10.0.0.1:42396.service: Deactivated successfully. Sep 12 17:25:15.597784 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:25:15.598640 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:25:15.599971 systemd-logind[1507]: Removed session 20. Sep 12 17:25:20.610197 systemd[1]: Started sshd@20-10.0.0.110:22-10.0.0.1:54260.service - OpenSSH per-connection server daemon (10.0.0.1:54260). Sep 12 17:25:20.668350 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 54260 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:20.669729 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:20.673747 systemd-logind[1507]: New session 21 of user core. Sep 12 17:25:20.685660 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:25:20.799544 sshd[4252]: Connection closed by 10.0.0.1 port 54260 Sep 12 17:25:20.800068 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:20.804429 systemd[1]: sshd@20-10.0.0.110:22-10.0.0.1:54260.service: Deactivated successfully. Sep 12 17:25:20.806352 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:25:20.807231 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:25:20.808391 systemd-logind[1507]: Removed session 21. Sep 12 17:25:25.820897 systemd[1]: Started sshd@21-10.0.0.110:22-10.0.0.1:54272.service - OpenSSH per-connection server daemon (10.0.0.1:54272). 
Sep 12 17:25:25.886355 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 54272 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:25.887773 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:25.894835 systemd-logind[1507]: New session 22 of user core. Sep 12 17:25:25.904647 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:25:26.036811 sshd[4269]: Connection closed by 10.0.0.1 port 54272 Sep 12 17:25:26.038155 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:26.050522 systemd[1]: sshd@21-10.0.0.110:22-10.0.0.1:54272.service: Deactivated successfully. Sep 12 17:25:26.052984 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:25:26.054993 systemd-logind[1507]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:25:26.057693 systemd[1]: Started sshd@22-10.0.0.110:22-10.0.0.1:54282.service - OpenSSH per-connection server daemon (10.0.0.1:54282). Sep 12 17:25:26.058808 systemd-logind[1507]: Removed session 22. Sep 12 17:25:26.133610 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 54282 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:26.135273 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:26.140358 systemd-logind[1507]: New session 23 of user core. Sep 12 17:25:26.156640 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:25:28.178326 containerd[1528]: time="2025-09-12T17:25:28.177957032Z" level=info msg="StopContainer for \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" with timeout 30 (s)" Sep 12 17:25:28.178687 containerd[1528]: time="2025-09-12T17:25:28.178390794Z" level=info msg="Stop container \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" with signal terminated" Sep 12 17:25:28.190590 systemd[1]: cri-containerd-bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3.scope: Deactivated successfully. Sep 12 17:25:28.192675 containerd[1528]: time="2025-09-12T17:25:28.192630983Z" level=info msg="received exit event container_id:\"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" id:\"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" pid:3245 exited_at:{seconds:1757697928 nanos:191587458}" Sep 12 17:25:28.193088 containerd[1528]: time="2025-09-12T17:25:28.193028025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" id:\"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" pid:3245 exited_at:{seconds:1757697928 nanos:191587458}" Sep 12 17:25:28.219164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3-rootfs.mount: Deactivated successfully. 
Sep 12 17:25:28.219766 containerd[1528]: time="2025-09-12T17:25:28.219406232Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:25:28.221986 containerd[1528]: time="2025-09-12T17:25:28.221928084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" id:\"f70ebbb9bf2633969a70632c98ebdfd1674f7f02da7dc0529c65105dab2b9477\" pid:4313 exited_at:{seconds:1757697928 nanos:220385476}" Sep 12 17:25:28.226043 containerd[1528]: time="2025-09-12T17:25:28.225992743Z" level=info msg="StopContainer for \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" with timeout 2 (s)" Sep 12 17:25:28.226301 containerd[1528]: time="2025-09-12T17:25:28.226275905Z" level=info msg="Stop container \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" with signal terminated" Sep 12 17:25:28.234333 systemd-networkd[1437]: lxc_health: Link DOWN Sep 12 17:25:28.234345 systemd-networkd[1437]: lxc_health: Lost carrier Sep 12 17:25:28.242755 containerd[1528]: time="2025-09-12T17:25:28.242712384Z" level=info msg="StopContainer for \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" returns successfully" Sep 12 17:25:28.245407 containerd[1528]: time="2025-09-12T17:25:28.245351477Z" level=info msg="StopPodSandbox for \"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\"" Sep 12 17:25:28.250111 systemd[1]: cri-containerd-0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb.scope: Deactivated successfully. Sep 12 17:25:28.250431 systemd[1]: cri-containerd-0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb.scope: Consumed 6.381s CPU time, 125.8M memory peak, 1.3M read from disk, 12.9M written to disk. Sep 12 17:25:28.251330 containerd[1528]: time="2025-09-12T17:25:28.250997504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" id:\"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" pid:3316 exited_at:{seconds:1757697928 nanos:250724342}" Sep 12 17:25:28.251330 containerd[1528]: time="2025-09-12T17:25:28.251125424Z" level=info msg="received exit event container_id:\"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" id:\"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" pid:3316 exited_at:{seconds:1757697928 nanos:250724342}" Sep 12 17:25:28.252137 containerd[1528]: time="2025-09-12T17:25:28.252106589Z" level=info msg="Container to stop \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:25:28.261529 systemd[1]: cri-containerd-9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638.scope: Deactivated successfully. 
Sep 12 17:25:28.262534 containerd[1528]: time="2025-09-12T17:25:28.262111197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\" id:\"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\" pid:2922 exit_status:137 exited_at:{seconds:1757697928 nanos:261717235}" Sep 12 17:25:28.277051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb-rootfs.mount: Deactivated successfully. Sep 12 17:25:28.289925 containerd[1528]: time="2025-09-12T17:25:28.289840931Z" level=info msg="StopContainer for \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" returns successfully" Sep 12 17:25:28.290569 containerd[1528]: time="2025-09-12T17:25:28.290495174Z" level=info msg="StopPodSandbox for \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\"" Sep 12 17:25:28.290629 containerd[1528]: time="2025-09-12T17:25:28.290578294Z" level=info msg="Container to stop \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:25:28.290629 containerd[1528]: time="2025-09-12T17:25:28.290599854Z" level=info msg="Container to stop \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:25:28.290629 containerd[1528]: time="2025-09-12T17:25:28.290609535Z" level=info msg="Container to stop \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:25:28.290629 containerd[1528]: time="2025-09-12T17:25:28.290617695Z" level=info msg="Container to stop \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:25:28.290629 containerd[1528]: time="2025-09-12T17:25:28.290625695Z" level=info msg="Container to stop \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:25:28.292363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638-rootfs.mount: Deactivated successfully. Sep 12 17:25:28.298017 systemd[1]: cri-containerd-ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245.scope: Deactivated successfully. Sep 12 17:25:28.300732 containerd[1528]: time="2025-09-12T17:25:28.299175976Z" level=info msg="shim disconnected" id=9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638 namespace=k8s.io Sep 12 17:25:28.308447 containerd[1528]: time="2025-09-12T17:25:28.299201736Z" level=warning msg="cleaning up after shim disconnected" id=9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638 namespace=k8s.io Sep 12 17:25:28.308447 containerd[1528]: time="2025-09-12T17:25:28.308017298Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:25:28.320474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245-rootfs.mount: Deactivated successfully. 
Sep 12 17:25:28.326381 containerd[1528]: time="2025-09-12T17:25:28.326331827Z" level=info msg="shim disconnected" id=ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245 namespace=k8s.io Sep 12 17:25:28.326612 containerd[1528]: time="2025-09-12T17:25:28.326368867Z" level=warning msg="cleaning up after shim disconnected" id=ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245 namespace=k8s.io Sep 12 17:25:28.326612 containerd[1528]: time="2025-09-12T17:25:28.326572228Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:25:28.331909 containerd[1528]: time="2025-09-12T17:25:28.331561212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" id:\"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" pid:2809 exit_status:137 exited_at:{seconds:1757697928 nanos:298580893}" Sep 12 17:25:28.331909 containerd[1528]: time="2025-09-12T17:25:28.331583252Z" level=info msg="TearDown network for sandbox \"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\" successfully" Sep 12 17:25:28.331909 containerd[1528]: time="2025-09-12T17:25:28.331714372Z" level=info msg="StopPodSandbox for \"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\" returns successfully" Sep 12 17:25:28.332052 containerd[1528]: time="2025-09-12T17:25:28.332010574Z" level=info msg="TearDown network for sandbox \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" successfully" Sep 12 17:25:28.332052 containerd[1528]: time="2025-09-12T17:25:28.332026654Z" level=info msg="StopPodSandbox for \"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" returns successfully" Sep 12 17:25:28.333687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638-shm.mount: Deactivated successfully. Sep 12 17:25:28.333791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245-shm.mount: Deactivated successfully. 
Sep 12 17:25:28.336254 containerd[1528]: time="2025-09-12T17:25:28.335605391Z" level=info msg="received exit event sandbox_id:\"9d0665321916a313d769fa8b36ed9c86f920df654e50ed84703f3a33aeca3638\" exit_status:137 exited_at:{seconds:1757697928 nanos:261717235}" Sep 12 17:25:28.336254 containerd[1528]: time="2025-09-12T17:25:28.336012033Z" level=info msg="received exit event sandbox_id:\"ff43426c5ba2ebe026eca37ac8baf39db85c96b959d04037003eff4227dbe245\" exit_status:137 exited_at:{seconds:1757697928 nanos:298580893}" Sep 12 17:25:28.442179 kubelet[2664]: I0912 17:25:28.441972 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-config-path\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442179 kubelet[2664]: I0912 17:25:28.442015 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-kernel\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442179 kubelet[2664]: I0912 17:25:28.442030 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-net\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442179 kubelet[2664]: I0912 17:25:28.442046 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-lib-modules\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442179 kubelet[2664]: I0912 17:25:28.442067 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-bpf-maps\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442179 kubelet[2664]: I0912 17:25:28.442083 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-hubble-tls\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442637 kubelet[2664]: I0912 17:25:28.442098 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cni-path\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442637 kubelet[2664]: I0912 17:25:28.442113 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-xtables-lock\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442637 kubelet[2664]: I0912 17:25:28.442127 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-hostproc\") 
pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442637 kubelet[2664]: I0912 17:25:28.442145 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfgq\" (UniqueName: \"kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-kube-api-access-qzfgq\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442637 kubelet[2664]: I0912 17:25:28.442162 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e40acf47-c47c-4492-b130-d5de5b007667-cilium-config-path\") pod \"e40acf47-c47c-4492-b130-d5de5b007667\" (UID: \"e40acf47-c47c-4492-b130-d5de5b007667\") " Sep 12 17:25:28.442637 kubelet[2664]: I0912 17:25:28.442176 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-run\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442760 kubelet[2664]: I0912 17:25:28.442192 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-cgroup\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442760 kubelet[2664]: I0912 17:25:28.442206 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-etc-cni-netd\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442760 kubelet[2664]: I0912 17:25:28.442226 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee086ec-2815-40b9-afcf-94289483ccc9-clustermesh-secrets\") pod \"6ee086ec-2815-40b9-afcf-94289483ccc9\" (UID: \"6ee086ec-2815-40b9-afcf-94289483ccc9\") " Sep 12 17:25:28.442760 kubelet[2664]: I0912 17:25:28.442243 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89kxr\" (UniqueName: \"kubernetes.io/projected/e40acf47-c47c-4492-b130-d5de5b007667-kube-api-access-89kxr\") pod \"e40acf47-c47c-4492-b130-d5de5b007667\" (UID: \"e40acf47-c47c-4492-b130-d5de5b007667\") " Sep 12 17:25:28.449553 kubelet[2664]: I0912 17:25:28.449465 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.449553 kubelet[2664]: I0912 17:25:28.449549 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.451437 kubelet[2664]: I0912 17:25:28.450488 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.451437 kubelet[2664]: I0912 17:25:28.450610 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cni-path" (OuterVolumeSpecName: "cni-path") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.451437 kubelet[2664]: I0912 17:25:28.450638 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.452145 kubelet[2664]: I0912 17:25:28.452111 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:25:28.452254 kubelet[2664]: I0912 17:25:28.452240 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.452324 kubelet[2664]: I0912 17:25:28.452312 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.452988 kubelet[2664]: I0912 17:25:28.452945 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40acf47-c47c-4492-b130-d5de5b007667-kube-api-access-89kxr" (OuterVolumeSpecName: "kube-api-access-89kxr") pod "e40acf47-c47c-4492-b130-d5de5b007667" (UID: "e40acf47-c47c-4492-b130-d5de5b007667"). InnerVolumeSpecName "kube-api-access-89kxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:25:28.452988 kubelet[2664]: I0912 17:25:28.452966 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-kube-api-access-qzfgq" (OuterVolumeSpecName: "kube-api-access-qzfgq") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "kube-api-access-qzfgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:25:28.453072 kubelet[2664]: I0912 17:25:28.452999 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.453072 kubelet[2664]: I0912 17:25:28.453002 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-hostproc" (OuterVolumeSpecName: "hostproc") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.453072 kubelet[2664]: I0912 17:25:28.453021 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:25:28.453207 kubelet[2664]: I0912 17:25:28.453176 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:25:28.454323 kubelet[2664]: I0912 17:25:28.454293 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40acf47-c47c-4492-b130-d5de5b007667-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e40acf47-c47c-4492-b130-d5de5b007667" (UID: "e40acf47-c47c-4492-b130-d5de5b007667"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:25:28.454935 kubelet[2664]: I0912 17:25:28.454892 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee086ec-2815-40b9-afcf-94289483ccc9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6ee086ec-2815-40b9-afcf-94289483ccc9" (UID: "6ee086ec-2815-40b9-afcf-94289483ccc9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:25:28.543254 kubelet[2664]: I0912 17:25:28.543201 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543254 kubelet[2664]: I0912 17:25:28.543234 2664 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543254 kubelet[2664]: I0912 17:25:28.543243 2664 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543254 kubelet[2664]: I0912 17:25:28.543255 2664 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543254 kubelet[2664]: I0912 17:25:28.543264 2664 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543254 kubelet[2664]: I0912 17:25:28.543272 2664 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543543 kubelet[2664]: I0912 17:25:28.543280 2664 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543543 kubelet[2664]: I0912 17:25:28.543313 2664 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543543 kubelet[2664]: I0912 17:25:28.543321 2664 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543543 kubelet[2664]: I0912 17:25:28.543331 2664 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfgq\" (UniqueName: \"kubernetes.io/projected/6ee086ec-2815-40b9-afcf-94289483ccc9-kube-api-access-qzfgq\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543543 kubelet[2664]: I0912 17:25:28.543339 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543543 kubelet[2664]: I0912 17:25:28.543346 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543543 kubelet[2664]: I0912 17:25:28.543356 2664 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee086ec-2815-40b9-afcf-94289483ccc9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 
17:25:28.543543 kubelet[2664]: I0912 17:25:28.543364 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e40acf47-c47c-4492-b130-d5de5b007667-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543703 kubelet[2664]: I0912 17:25:28.543372 2664 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee086ec-2815-40b9-afcf-94289483ccc9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.543703 kubelet[2664]: I0912 17:25:28.543382 2664 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89kxr\" (UniqueName: \"kubernetes.io/projected/e40acf47-c47c-4492-b130-d5de5b007667-kube-api-access-89kxr\") on node \"localhost\" DevicePath \"\"" Sep 12 17:25:28.744147 systemd[1]: Removed slice kubepods-besteffort-pode40acf47_c47c_4492_b130_d5de5b007667.slice - libcontainer container kubepods-besteffort-pode40acf47_c47c_4492_b130_d5de5b007667.slice. Sep 12 17:25:28.747087 systemd[1]: Removed slice kubepods-burstable-pod6ee086ec_2815_40b9_afcf_94289483ccc9.slice - libcontainer container kubepods-burstable-pod6ee086ec_2815_40b9_afcf_94289483ccc9.slice. Sep 12 17:25:28.747289 systemd[1]: kubepods-burstable-pod6ee086ec_2815_40b9_afcf_94289483ccc9.slice: Consumed 6.473s CPU time, 126.1M memory peak, 1.3M read from disk, 12.9M written to disk. Sep 12 17:25:28.986388 kubelet[2664]: I0912 17:25:28.986331 2664 scope.go:117] "RemoveContainer" containerID="bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3" Sep 12 17:25:28.989741 containerd[1528]: time="2025-09-12T17:25:28.989706461Z" level=info msg="RemoveContainer for \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\"" Sep 12 17:25:28.994884 containerd[1528]: time="2025-09-12T17:25:28.994671725Z" level=info msg="RemoveContainer for \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" returns successfully" Sep 12 17:25:28.994965 kubelet[2664]: I0912 17:25:28.994850 2664 scope.go:117] "RemoveContainer" containerID="bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3" Sep 12 17:25:28.995558 containerd[1528]: time="2025-09-12T17:25:28.995505569Z" level=error msg="ContainerStatus for \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\": not found" Sep 12 17:25:29.000849 kubelet[2664]: E0912 17:25:29.000783 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\": not found" containerID="bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3" Sep 12 17:25:29.001001 kubelet[2664]: I0912 17:25:29.000925 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3"} err="failed to get container status \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc13d8b57b498aa81c95f75e352a90d524b7190ad8f42db42ee637e1bf7869a3\": not found" Sep 12 17:25:29.001601 kubelet[2664]: I0912 17:25:29.001573 2664 scope.go:117] "RemoveContainer" 
containerID="0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb" Sep 12 17:25:29.004230 containerd[1528]: time="2025-09-12T17:25:29.004205612Z" level=info msg="RemoveContainer for \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\"" Sep 12 17:25:29.009470 containerd[1528]: time="2025-09-12T17:25:29.009442919Z" level=info msg="RemoveContainer for \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" returns successfully" Sep 12 17:25:29.010452 kubelet[2664]: I0912 17:25:29.010428 2664 scope.go:117] "RemoveContainer" containerID="0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b" Sep 12 17:25:29.013097 containerd[1528]: time="2025-09-12T17:25:29.013066938Z" level=info msg="RemoveContainer for \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\"" Sep 12 17:25:29.017657 containerd[1528]: time="2025-09-12T17:25:29.017625241Z" level=info msg="RemoveContainer for \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" returns successfully" Sep 12 17:25:29.017847 kubelet[2664]: I0912 17:25:29.017815 2664 scope.go:117] "RemoveContainer" containerID="91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484" Sep 12 17:25:29.020918 containerd[1528]: time="2025-09-12T17:25:29.020888978Z" level=info msg="RemoveContainer for \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\"" Sep 12 17:25:29.031068 containerd[1528]: time="2025-09-12T17:25:29.031026551Z" level=info msg="RemoveContainer for \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" returns successfully" Sep 12 17:25:29.031294 kubelet[2664]: I0912 17:25:29.031257 2664 scope.go:117] "RemoveContainer" containerID="45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056" Sep 12 17:25:29.032512 containerd[1528]: time="2025-09-12T17:25:29.032488278Z" level=info msg="RemoveContainer for \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\"" Sep 12 17:25:29.035345 containerd[1528]: time="2025-09-12T17:25:29.035319213Z" level=info msg="RemoveContainer for \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" returns successfully" Sep 12 17:25:29.035562 kubelet[2664]: I0912 17:25:29.035507 2664 scope.go:117] "RemoveContainer" containerID="5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150" Sep 12 17:25:29.036975 containerd[1528]: time="2025-09-12T17:25:29.036948461Z" level=info msg="RemoveContainer for \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\"" Sep 12 17:25:29.039425 containerd[1528]: time="2025-09-12T17:25:29.039378634Z" level=info msg="RemoveContainer for \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" returns successfully" Sep 12 17:25:29.039653 kubelet[2664]: I0912 17:25:29.039564 2664 scope.go:117] "RemoveContainer" containerID="0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb" Sep 12 17:25:29.039795 containerd[1528]: time="2025-09-12T17:25:29.039761436Z" level=error msg="ContainerStatus for \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\": not found" Sep 12 17:25:29.039948 kubelet[2664]: E0912 17:25:29.039903 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\": not 
found" containerID="0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb" Sep 12 17:25:29.039985 kubelet[2664]: I0912 17:25:29.039958 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb"} err="failed to get container status \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ddd83225450a0c374bfe6c5c7ca8c88c2b472d08c6e809e5ef6ac89ea0e0eeb\": not found" Sep 12 17:25:29.039985 kubelet[2664]: I0912 17:25:29.039979 2664 scope.go:117] "RemoveContainer" containerID="0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b" Sep 12 17:25:29.040203 containerd[1528]: time="2025-09-12T17:25:29.040157198Z" level=error msg="ContainerStatus for \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\": not found" Sep 12 17:25:29.040322 kubelet[2664]: E0912 17:25:29.040301 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\": not found" containerID="0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b" Sep 12 17:25:29.040356 kubelet[2664]: I0912 17:25:29.040332 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b"} err="failed to get container status \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0344e45b64422946c9d04248a0c81e4780a7c7afee6e524b707d247868e1ff0b\": not found" Sep 12 17:25:29.040356 kubelet[2664]: I0912 17:25:29.040349 2664 scope.go:117] "RemoveContainer" containerID="91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484" Sep 12 17:25:29.040608 containerd[1528]: time="2025-09-12T17:25:29.040578160Z" level=error msg="ContainerStatus for \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\": not found" Sep 12 17:25:29.040718 kubelet[2664]: E0912 17:25:29.040698 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\": not found" containerID="91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484" Sep 12 17:25:29.040754 kubelet[2664]: I0912 17:25:29.040722 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484"} err="failed to get container status \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\": rpc error: code = NotFound desc = an error occurred when try to find container \"91dfa60e0134097372d57b8eaac6eba8f3b2b72175fbf2f23fa53f7592eff484\": not found" Sep 12 17:25:29.040754 kubelet[2664]: I0912 17:25:29.040736 2664 scope.go:117] "RemoveContainer" 
containerID="45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056" Sep 12 17:25:29.040910 containerd[1528]: time="2025-09-12T17:25:29.040876802Z" level=error msg="ContainerStatus for \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\": not found" Sep 12 17:25:29.041036 kubelet[2664]: E0912 17:25:29.041018 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\": not found" containerID="45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056" Sep 12 17:25:29.041208 kubelet[2664]: I0912 17:25:29.041068 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056"} err="failed to get container status \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\": rpc error: code = NotFound desc = an error occurred when try to find container \"45b720315d7016e837981b8b11fedcc34c20557ca4a5a6ec65e9974769c96056\": not found" Sep 12 17:25:29.041208 kubelet[2664]: I0912 17:25:29.041087 2664 scope.go:117] "RemoveContainer" containerID="5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150" Sep 12 17:25:29.041255 containerd[1528]: time="2025-09-12T17:25:29.041213403Z" level=error msg="ContainerStatus for \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\": not found" Sep 12 17:25:29.041369 kubelet[2664]: E0912 17:25:29.041350 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\": not found" containerID="5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150" Sep 12 17:25:29.041488 kubelet[2664]: I0912 17:25:29.041469 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150"} err="failed to get container status \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\": rpc error: code = NotFound desc = an error occurred when try to find container \"5230d267570cf8358e5fc69b8a7d57d7cceb2a296a68f576be19beb127217150\": not found" Sep 12 17:25:29.216989 systemd[1]: var-lib-kubelet-pods-e40acf47\x2dc47c\x2d4492\x2db130\x2dd5de5b007667-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d89kxr.mount: Deactivated successfully. Sep 12 17:25:29.217095 systemd[1]: var-lib-kubelet-pods-6ee086ec\x2d2815\x2d40b9\x2dafcf\x2d94289483ccc9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqzfgq.mount: Deactivated successfully. Sep 12 17:25:29.217144 systemd[1]: var-lib-kubelet-pods-6ee086ec\x2d2815\x2d40b9\x2dafcf\x2d94289483ccc9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:25:29.217202 systemd[1]: var-lib-kubelet-pods-6ee086ec\x2d2815\x2d40b9\x2dafcf\x2d94289483ccc9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 17:25:30.108426 sshd[4285]: Connection closed by 10.0.0.1 port 54282 Sep 12 17:25:30.108989 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:30.124301 systemd[1]: sshd@22-10.0.0.110:22-10.0.0.1:54282.service: Deactivated successfully. Sep 12 17:25:30.126297 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:25:30.126573 systemd[1]: session-23.scope: Consumed 1.313s CPU time, 24.9M memory peak. Sep 12 17:25:30.127788 systemd-logind[1507]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:25:30.130290 systemd[1]: Started sshd@23-10.0.0.110:22-10.0.0.1:48924.service - OpenSSH per-connection server daemon (10.0.0.1:48924). Sep 12 17:25:30.131736 systemd-logind[1507]: Removed session 23. Sep 12 17:25:30.185948 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 48924 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:30.187354 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:30.191807 systemd-logind[1507]: New session 24 of user core. Sep 12 17:25:30.201560 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:25:30.739425 kubelet[2664]: I0912 17:25:30.739237 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee086ec-2815-40b9-afcf-94289483ccc9" path="/var/lib/kubelet/pods/6ee086ec-2815-40b9-afcf-94289483ccc9/volumes" Sep 12 17:25:30.740453 kubelet[2664]: I0912 17:25:30.739774 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40acf47-c47c-4492-b130-d5de5b007667" path="/var/lib/kubelet/pods/e40acf47-c47c-4492-b130-d5de5b007667/volumes" Sep 12 17:25:30.804942 kubelet[2664]: E0912 17:25:30.804882 2664 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:25:30.990865 sshd[4439]: Connection closed by 10.0.0.1 port 48924 Sep 12 17:25:30.991932 sshd-session[4436]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:31.004398 systemd[1]: sshd@23-10.0.0.110:22-10.0.0.1:48924.service: Deactivated successfully. Sep 12 17:25:31.007291 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:25:31.008309 systemd-logind[1507]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:25:31.012824 systemd[1]: Started sshd@24-10.0.0.110:22-10.0.0.1:48932.service - OpenSSH per-connection server daemon (10.0.0.1:48932). Sep 12 17:25:31.015697 systemd-logind[1507]: Removed session 24. 
Sep 12 17:25:31.018258 kubelet[2664]: E0912 17:25:31.018032 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee086ec-2815-40b9-afcf-94289483ccc9" containerName="mount-bpf-fs" Sep 12 17:25:31.018258 kubelet[2664]: E0912 17:25:31.018070 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e40acf47-c47c-4492-b130-d5de5b007667" containerName="cilium-operator" Sep 12 17:25:31.018258 kubelet[2664]: E0912 17:25:31.018077 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee086ec-2815-40b9-afcf-94289483ccc9" containerName="clean-cilium-state" Sep 12 17:25:31.018258 kubelet[2664]: E0912 17:25:31.018084 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee086ec-2815-40b9-afcf-94289483ccc9" containerName="mount-cgroup" Sep 12 17:25:31.018258 kubelet[2664]: E0912 17:25:31.018089 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee086ec-2815-40b9-afcf-94289483ccc9" containerName="apply-sysctl-overwrites" Sep 12 17:25:31.018258 kubelet[2664]: E0912 17:25:31.018094 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee086ec-2815-40b9-afcf-94289483ccc9" containerName="cilium-agent" Sep 12 17:25:31.018258 kubelet[2664]: I0912 17:25:31.018117 2664 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee086ec-2815-40b9-afcf-94289483ccc9" containerName="cilium-agent" Sep 12 17:25:31.018258 kubelet[2664]: I0912 17:25:31.018123 2664 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40acf47-c47c-4492-b130-d5de5b007667" containerName="cilium-operator" Sep 12 17:25:31.035684 systemd[1]: Created slice kubepods-burstable-pod4a7d82df_4612_4030_9709_2ac3ba0ad2a6.slice - libcontainer container kubepods-burstable-pod4a7d82df_4612_4030_9709_2ac3ba0ad2a6.slice. Sep 12 17:25:31.077950 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:31.079181 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:31.083759 systemd-logind[1507]: New session 25 of user core. Sep 12 17:25:31.094594 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:25:31.145446 sshd[4454]: Connection closed by 10.0.0.1 port 48932 Sep 12 17:25:31.146281 sshd-session[4451]: pam_unix(sshd:session): session closed for user core Sep 12 17:25:31.157201 systemd[1]: sshd@24-10.0.0.110:22-10.0.0.1:48932.service: Deactivated successfully. 
Sep 12 17:25:31.158751 kubelet[2664]: I0912 17:25:31.158677 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-host-proc-sys-kernel\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.158751 kubelet[2664]: I0912 17:25:31.158715 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-etc-cni-netd\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.158970 kubelet[2664]: I0912 17:25:31.158734 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-clustermesh-secrets\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.158970 kubelet[2664]: I0912 17:25:31.158915 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-hostproc\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.158970 kubelet[2664]: I0912 17:25:31.158935 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvq68\" (UniqueName: \"kubernetes.io/projected/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-kube-api-access-kvq68\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.158970 kubelet[2664]: I0912 17:25:31.158952 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-cilium-config-path\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159094 systemd[1]: session-25.scope: Deactivated successfully. 
Sep 12 17:25:31.159453 kubelet[2664]: I0912 17:25:31.159128 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-host-proc-sys-net\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159453 kubelet[2664]: I0912 17:25:31.159153 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-bpf-maps\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159453 kubelet[2664]: I0912 17:25:31.159195 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-cilium-ipsec-secrets\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159453 kubelet[2664]: I0912 17:25:31.159244 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-cilium-cgroup\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159453 kubelet[2664]: I0912 17:25:31.159284 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-cilium-run\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159453 kubelet[2664]: I0912 17:25:31.159310 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-cni-path\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159606 kubelet[2664]: I0912 17:25:31.159326 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-lib-modules\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159606 kubelet[2664]: I0912 17:25:31.159340 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-hubble-tls\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159606 kubelet[2664]: I0912 17:25:31.159356 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a7d82df-4612-4030-9709-2ac3ba0ad2a6-xtables-lock\") pod \"cilium-7dhxz\" (UID: \"4a7d82df-4612-4030-9709-2ac3ba0ad2a6\") " pod="kube-system/cilium-7dhxz" Sep 12 17:25:31.159810 systemd-logind[1507]: Session 25 logged out. Waiting for processes to exit. 
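The VerifyControllerAttachedVolume entries above enumerate the volumes of the new cilium-7dhxz pod (host-path mounts, a secret, a configmap, and a projected service-account token). A sketch of what a few of them would look like in the pod spec, using the k8s.io/api/core/v1 types; the volume names match the log, while the hostPath location, Secret name, and ConfigMap name are assumptions taken from a typical Cilium DaemonSet, not from this log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Roughly what three of the volumes attached above correspond to in the
	// pod spec. Only the volume names come from the log; the targets are
	// illustrative assumptions.
	volumes := []corev1.Volume{
		{
			Name: "bpf-maps", // kubernetes.io/host-path/...-bpf-maps
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf"}, // assumed path
			},
		},
		{
			Name: "clustermesh-secrets", // kubernetes.io/secret/...-clustermesh-secrets
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}, // assumed name
			},
		},
		{
			Name: "cilium-config-path", // kubernetes.io/configmap/...-cilium-config-path
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}, // assumed name
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
```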
Sep 12 17:25:31.162809 systemd[1]: Started sshd@25-10.0.0.110:22-10.0.0.1:48936.service - OpenSSH per-connection server daemon (10.0.0.1:48936). Sep 12 17:25:31.163709 systemd-logind[1507]: Removed session 25. Sep 12 17:25:31.211778 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 48936 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:25:31.212907 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:25:31.217055 systemd-logind[1507]: New session 26 of user core. Sep 12 17:25:31.222571 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:25:31.340397 containerd[1528]: time="2025-09-12T17:25:31.340060754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dhxz,Uid:4a7d82df-4612-4030-9709-2ac3ba0ad2a6,Namespace:kube-system,Attempt:0,}" Sep 12 17:25:31.355142 containerd[1528]: time="2025-09-12T17:25:31.354847520Z" level=info msg="connecting to shim 6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0" address="unix:///run/containerd/s/fcca1d1349e21ac0b0e26e039f813e076a6234a1cbcc99880c1386da2ccf1e9e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:25:31.378597 systemd[1]: Started cri-containerd-6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0.scope - libcontainer container 6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0. Sep 12 17:25:31.399797 containerd[1528]: time="2025-09-12T17:25:31.399759062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dhxz,Uid:4a7d82df-4612-4030-9709-2ac3ba0ad2a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\"" Sep 12 17:25:31.403297 containerd[1528]: time="2025-09-12T17:25:31.403251202Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:25:31.410715 containerd[1528]: time="2025-09-12T17:25:31.410681365Z" level=info msg="Container 65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:25:31.415944 containerd[1528]: time="2025-09-12T17:25:31.415906836Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f\"" Sep 12 17:25:31.417642 containerd[1528]: time="2025-09-12T17:25:31.417618766Z" level=info msg="StartContainer for \"65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f\"" Sep 12 17:25:31.418694 containerd[1528]: time="2025-09-12T17:25:31.418666732Z" level=info msg="connecting to shim 65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f" address="unix:///run/containerd/s/fcca1d1349e21ac0b0e26e039f813e076a6234a1cbcc99880c1386da2ccf1e9e" protocol=ttrpc version=3 Sep 12 17:25:31.436595 systemd[1]: Started cri-containerd-65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f.scope - libcontainer container 65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f. 
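The containerd entries above show the CRI call order for the new pod: RunPodSandbox for cilium-7dhxz, then CreateContainer and StartContainer for the first init container (mount-cgroup), all talking to the same shim over the unix socket shown in the "connecting to shim" messages. A hedged sketch of that sequence against the k8s.io/cri-api client interface; only the call order and the names are taken from the log, the rest of the configuration is omitted or illustrative, and this is not kubelet's actual code.

```go
package main

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runCiliumPod sketches the CRI call order visible in the log: create the pod
// sandbox, then create and start the first init container inside it.
func runCiliumPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-7dhxz",
				Namespace: "kube-system",
				Uid:       "4a7d82df-4612-4030-9709-2ac3ba0ad2a6",
			},
		},
	})
	if err != nil {
		return err
	}
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
		},
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return err
}

func main() {} // wiring up a real CRI connection is out of scope for this sketch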
Sep 12 17:25:31.463586 containerd[1528]: time="2025-09-12T17:25:31.463548513Z" level=info msg="StartContainer for \"65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f\" returns successfully" Sep 12 17:25:31.472015 systemd[1]: cri-containerd-65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f.scope: Deactivated successfully. Sep 12 17:25:31.473370 containerd[1528]: time="2025-09-12T17:25:31.473279370Z" level=info msg="received exit event container_id:\"65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f\" id:\"65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f\" pid:4533 exited_at:{seconds:1757697931 nanos:473014688}" Sep 12 17:25:31.473590 containerd[1528]: time="2025-09-12T17:25:31.473503851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f\" id:\"65d5806f7d4900ae33fca853d870f2a857f7046e373d45694662ff4fedd0834f\" pid:4533 exited_at:{seconds:1757697931 nanos:473014688}" Sep 12 17:25:32.008274 containerd[1528]: time="2025-09-12T17:25:32.008237969Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:25:32.015446 containerd[1528]: time="2025-09-12T17:25:32.015052971Z" level=info msg="Container 8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:25:32.020692 containerd[1528]: time="2025-09-12T17:25:32.020653285Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66\"" Sep 12 17:25:32.022532 containerd[1528]: time="2025-09-12T17:25:32.022489377Z" level=info msg="StartContainer for \"8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66\"" Sep 12 17:25:32.023429 containerd[1528]: time="2025-09-12T17:25:32.023381702Z" level=info msg="connecting to shim 8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66" address="unix:///run/containerd/s/fcca1d1349e21ac0b0e26e039f813e076a6234a1cbcc99880c1386da2ccf1e9e" protocol=ttrpc version=3 Sep 12 17:25:32.052737 systemd[1]: Started cri-containerd-8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66.scope - libcontainer container 8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66. Sep 12 17:25:32.090793 containerd[1528]: time="2025-09-12T17:25:32.090751996Z" level=info msg="StartContainer for \"8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66\" returns successfully" Sep 12 17:25:32.099515 systemd[1]: cri-containerd-8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66.scope: Deactivated successfully. 
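The "received exit event" entries above carry protobuf timestamps of the form exited_at:{seconds:... nanos:...}. Converting one with Go's time.Unix confirms it lines up with the 17:25:31 journal stamp of the surrounding entries.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1757697931 nanos:473014688} from the exit event above.
	exitedAt := time.Unix(1757697931, 473014688).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-09-12T17:25:31.473014688Z
}
```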
Sep 12 17:25:32.102588 containerd[1528]: time="2025-09-12T17:25:32.102552308Z" level=info msg="received exit event container_id:\"8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66\" id:\"8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66\" pid:4582 exited_at:{seconds:1757697932 nanos:101917225}" Sep 12 17:25:32.102805 containerd[1528]: time="2025-09-12T17:25:32.102771310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66\" id:\"8058d49cec6f0185ae1486de705db1c2b6dac12d8b21957508e6901d014f5b66\" pid:4582 exited_at:{seconds:1757697932 nanos:101917225}" Sep 12 17:25:32.479091 kubelet[2664]: I0912 17:25:32.477703 2664 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:25:32Z","lastTransitionTime":"2025-09-12T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:25:33.022236 containerd[1528]: time="2025-09-12T17:25:33.021618240Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:25:33.047343 containerd[1528]: time="2025-09-12T17:25:33.047304206Z" level=info msg="Container d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:25:33.058143 containerd[1528]: time="2025-09-12T17:25:33.058088395Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba\"" Sep 12 17:25:33.058812 containerd[1528]: time="2025-09-12T17:25:33.058768040Z" level=info msg="StartContainer for \"d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba\"" Sep 12 17:25:33.060904 containerd[1528]: time="2025-09-12T17:25:33.060820853Z" level=info msg="connecting to shim d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba" address="unix:///run/containerd/s/fcca1d1349e21ac0b0e26e039f813e076a6234a1cbcc99880c1386da2ccf1e9e" protocol=ttrpc version=3 Sep 12 17:25:33.095615 systemd[1]: Started cri-containerd-d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba.scope - libcontainer container d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba. Sep 12 17:25:33.132360 systemd[1]: cri-containerd-d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba.scope: Deactivated successfully. 
Sep 12 17:25:33.132868 containerd[1528]: time="2025-09-12T17:25:33.132836797Z" level=info msg="StartContainer for \"d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba\" returns successfully" Sep 12 17:25:33.134652 containerd[1528]: time="2025-09-12T17:25:33.134619929Z" level=info msg="received exit event container_id:\"d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba\" id:\"d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba\" pid:4627 exited_at:{seconds:1757697933 nanos:133291800}" Sep 12 17:25:33.135189 containerd[1528]: time="2025-09-12T17:25:33.135163532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba\" id:\"d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba\" pid:4627 exited_at:{seconds:1757697933 nanos:133291800}" Sep 12 17:25:33.264466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6fb94d75e6e4e1a9a8d74c57e81d55d63db3ea5f57af76bf5f8a0e60ffc0dba-rootfs.mount: Deactivated successfully. Sep 12 17:25:34.024061 containerd[1528]: time="2025-09-12T17:25:34.024002990Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:25:34.039236 containerd[1528]: time="2025-09-12T17:25:34.039162293Z" level=info msg="Container a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:25:34.050377 containerd[1528]: time="2025-09-12T17:25:34.050160727Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76\"" Sep 12 17:25:34.050907 containerd[1528]: time="2025-09-12T17:25:34.050878612Z" level=info msg="StartContainer for \"a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76\"" Sep 12 17:25:34.052173 containerd[1528]: time="2025-09-12T17:25:34.052112020Z" level=info msg="connecting to shim a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76" address="unix:///run/containerd/s/fcca1d1349e21ac0b0e26e039f813e076a6234a1cbcc99880c1386da2ccf1e9e" protocol=ttrpc version=3 Sep 12 17:25:34.080648 systemd[1]: Started cri-containerd-a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76.scope - libcontainer container a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76. Sep 12 17:25:34.112995 systemd[1]: cri-containerd-a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76.scope: Deactivated successfully. 
Sep 12 17:25:34.114853 containerd[1528]: time="2025-09-12T17:25:34.114641362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76\" id:\"a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76\" pid:4667 exited_at:{seconds:1757697934 nanos:113990997}" Sep 12 17:25:34.124610 containerd[1528]: time="2025-09-12T17:25:34.124549989Z" level=info msg="received exit event container_id:\"a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76\" id:\"a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76\" pid:4667 exited_at:{seconds:1757697934 nanos:113990997}" Sep 12 17:25:34.132895 containerd[1528]: time="2025-09-12T17:25:34.132278241Z" level=info msg="StartContainer for \"a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76\" returns successfully" Sep 12 17:25:34.145900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5f91b0ae800e8431f934b9766eeac504ead921f2409e1411d2f18a78721de76-rootfs.mount: Deactivated successfully. Sep 12 17:25:35.033785 containerd[1528]: time="2025-09-12T17:25:35.033705770Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:25:35.057133 containerd[1528]: time="2025-09-12T17:25:35.057083495Z" level=info msg="Container 2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:25:35.067188 containerd[1528]: time="2025-09-12T17:25:35.067137885Z" level=info msg="CreateContainer within sandbox \"6034c17184be52715932e2a6918792d5c8d2b9da13ea8113f2eb0b24662373c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\"" Sep 12 17:25:35.067704 containerd[1528]: time="2025-09-12T17:25:35.067667689Z" level=info msg="StartContainer for \"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\"" Sep 12 17:25:35.068589 containerd[1528]: time="2025-09-12T17:25:35.068561775Z" level=info msg="connecting to shim 2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832" address="unix:///run/containerd/s/fcca1d1349e21ac0b0e26e039f813e076a6234a1cbcc99880c1386da2ccf1e9e" protocol=ttrpc version=3 Sep 12 17:25:35.096605 systemd[1]: Started cri-containerd-2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832.scope - libcontainer container 2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832. 
Sep 12 17:25:35.145755 containerd[1528]: time="2025-09-12T17:25:35.145708798Z" level=info msg="StartContainer for \"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\" returns successfully" Sep 12 17:25:35.206727 containerd[1528]: time="2025-09-12T17:25:35.206680227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\" id:\"5b4b56c4584cea91ceb4fbe0aa788d85b9b35db664147de42291c74d5a4815dc\" pid:4734 exited_at:{seconds:1757697935 nanos:206376185}" Sep 12 17:25:35.431469 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 12 17:25:37.576613 containerd[1528]: time="2025-09-12T17:25:37.576564768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\" id:\"9a45fe87a0cdc3b8b2ef8ccb4378c31199726d739d628c4abeb2d3719ff68d98\" pid:5012 exit_status:1 exited_at:{seconds:1757697937 nanos:576215286}" Sep 12 17:25:38.443426 systemd-networkd[1437]: lxc_health: Link UP Sep 12 17:25:38.445694 systemd-networkd[1437]: lxc_health: Gained carrier Sep 12 17:25:39.370632 kubelet[2664]: I0912 17:25:39.369941 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7dhxz" podStartSLOduration=8.369924136 podStartE2EDuration="8.369924136s" podCreationTimestamp="2025-09-12 17:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:25:36.058682195 +0000 UTC m=+85.410675173" watchObservedRunningTime="2025-09-12 17:25:39.369924136 +0000 UTC m=+88.721917114" Sep 12 17:25:39.698163 containerd[1528]: time="2025-09-12T17:25:39.698109753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\" id:\"df1981691da8b9f8e065d73cc5c57a7a1209e7dbc61fe880ad3f2dfa506ea343\" pid:5272 exited_at:{seconds:1757697939 nanos:697686270}" Sep 12 17:25:40.390559 systemd-networkd[1437]: lxc_health: Gained IPv6LL Sep 12 17:25:41.892019 containerd[1528]: time="2025-09-12T17:25:41.891973270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\" id:\"dea05a37422c6460f34753a829e80c4464a0a92bb98d481bc3894cc08b5f586b\" pid:5299 exited_at:{seconds:1757697941 nanos:891634347}" Sep 12 17:25:44.037781 containerd[1528]: time="2025-09-12T17:25:44.037738545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\" id:\"e76ce66236a279c4e742296e345d909ae299dc248a939fc6991440b16a4118b8\" pid:5327 exited_at:{seconds:1757697944 nanos:37190100}" Sep 12 17:25:46.168888 containerd[1528]: time="2025-09-12T17:25:46.168839408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bd6b269b76066d776cac733c32f8329c3c4eb880a274fd27d786288b8896832\" id:\"821c2f6b2b1ed1fc17f5078d6dd782c90beef52450519b153d24f93269614db0\" pid:5352 exited_at:{seconds:1757697946 nanos:168365004}" Sep 12 17:25:46.171932 kubelet[2664]: E0912 17:25:46.171892 2664 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37968->127.0.0.1:39911: write tcp 127.0.0.1:37968->127.0.0.1:39911: write: broken pipe Sep 12 17:25:46.201508 sshd[4464]: Connection closed by 10.0.0.1 port 48936 Sep 12 17:25:46.202265 sshd-session[4461]: pam_unix(sshd:session): session closed for user core 
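For the pod_startup_latency_tracker entry above, podStartSLOduration is simply observedRunningTime minus podCreationTimestamp (the pull timestamps are zero because no image pull was needed). Redoing the subtraction reproduces the logged 8.369924136s.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pod_startup_latency_tracker entry for cilium-7dhxz.
	created, _ := time.Parse(time.RFC3339, "2025-09-12T17:25:31Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-09-12T17:25:39.369924136Z")
	fmt.Println(running.Sub(created)) // 8.369924136s
}
```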
Sep 12 17:25:46.206490 systemd-logind[1507]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:25:46.206650 systemd[1]: sshd@25-10.0.0.110:22-10.0.0.1:48936.service: Deactivated successfully. Sep 12 17:25:46.208175 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:25:46.209286 systemd-logind[1507]: Removed session 26.