Sep 3 23:22:57.782780 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 3 23:22:57.782810 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025
Sep 3 23:22:57.782821 kernel: KASLR enabled
Sep 3 23:22:57.782827 kernel: efi: EFI v2.7 by EDK II
Sep 3 23:22:57.782832 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Sep 3 23:22:57.782838 kernel: random: crng init done
Sep 3 23:22:57.782844 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 3 23:22:57.782850 kernel: secureboot: Secure boot enabled
Sep 3 23:22:57.782856 kernel: ACPI: Early table checksum verification disabled
Sep 3 23:22:57.782862 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 3 23:22:57.782868 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 3 23:22:57.782874 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782880 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782885 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782892 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782900 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782906 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782912 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782918 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782924 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:22:57.782930 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 3 23:22:57.782936 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 3 23:22:57.782942 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:22:57.782948 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 3 23:22:57.782954 kernel: Zone ranges:
Sep 3 23:22:57.782962 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:22:57.782968 kernel: DMA32 empty
Sep 3 23:22:57.782974 kernel: Normal empty
Sep 3 23:22:57.782980 kernel: Device empty
Sep 3 23:22:57.782986 kernel: Movable zone start for each node
Sep 3 23:22:57.782992 kernel: Early memory node ranges
Sep 3 23:22:57.783012 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 3 23:22:57.783018 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 3 23:22:57.783024 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 3 23:22:57.783030 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 3 23:22:57.783037 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 3 23:22:57.783043 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 3 23:22:57.783050 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 3 23:22:57.783056 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 3 23:22:57.783062 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 3 23:22:57.783071 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:22:57.783078 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 3 23:22:57.783084 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 3 23:22:57.783090 kernel: psci: probing for conduit method from ACPI.
Sep 3 23:22:57.783098 kernel: psci: PSCIv1.1 detected in firmware.
Sep 3 23:22:57.783104 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 3 23:22:57.783111 kernel: psci: Trusted OS migration not required
Sep 3 23:22:57.783117 kernel: psci: SMC Calling Convention v1.1
Sep 3 23:22:57.783123 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 3 23:22:57.783130 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 3 23:22:57.783136 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 3 23:22:57.783143 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 3 23:22:57.783149 kernel: Detected PIPT I-cache on CPU0
Sep 3 23:22:57.783157 kernel: CPU features: detected: GIC system register CPU interface
Sep 3 23:22:57.783163 kernel: CPU features: detected: Spectre-v4
Sep 3 23:22:57.783170 kernel: CPU features: detected: Spectre-BHB
Sep 3 23:22:57.783176 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 3 23:22:57.783182 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 3 23:22:57.783189 kernel: CPU features: detected: ARM erratum 1418040
Sep 3 23:22:57.783195 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 3 23:22:57.783201 kernel: alternatives: applying boot alternatives
Sep 3 23:22:57.783208 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:22:57.783215 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 3 23:22:57.783221 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 3 23:22:57.783229 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 3 23:22:57.783235 kernel: Fallback order for Node 0: 0
Sep 3 23:22:57.783242 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 3 23:22:57.783248 kernel: Policy zone: DMA
Sep 3 23:22:57.783254 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 3 23:22:57.783260 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 3 23:22:57.783267 kernel: software IO TLB: area num 4.
Sep 3 23:22:57.783273 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 3 23:22:57.783279 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 3 23:22:57.783286 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 3 23:22:57.783292 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 3 23:22:57.783299 kernel: rcu: RCU event tracing is enabled.
Sep 3 23:22:57.783307 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 3 23:22:57.783313 kernel: Trampoline variant of Tasks RCU enabled.
Sep 3 23:22:57.783319 kernel: Tracing variant of Tasks RCU enabled.
Sep 3 23:22:57.783326 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 3 23:22:57.783332 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 3 23:22:57.783339 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:22:57.783345 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:22:57.783352 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 3 23:22:57.783358 kernel: GICv3: 256 SPIs implemented
Sep 3 23:22:57.783364 kernel: GICv3: 0 Extended SPIs implemented
Sep 3 23:22:57.783370 kernel: Root IRQ handler: gic_handle_irq
Sep 3 23:22:57.783378 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 3 23:22:57.783384 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 3 23:22:57.783390 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 3 23:22:57.783396 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 3 23:22:57.783403 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 3 23:22:57.783409 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 3 23:22:57.783416 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 3 23:22:57.783422 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 3 23:22:57.783428 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 3 23:22:57.783435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:22:57.783441 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 3 23:22:57.783447 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 3 23:22:57.783455 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 3 23:22:57.783462 kernel: arm-pv: using stolen time PV
Sep 3 23:22:57.783469 kernel: Console: colour dummy device 80x25
Sep 3 23:22:57.783475 kernel: ACPI: Core revision 20240827
Sep 3 23:22:57.783482 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 3 23:22:57.783488 kernel: pid_max: default: 32768 minimum: 301
Sep 3 23:22:57.783495 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 3 23:22:57.783501 kernel: landlock: Up and running.
Sep 3 23:22:57.783508 kernel: SELinux: Initializing.
Sep 3 23:22:57.783516 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:22:57.783522 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:22:57.783529 kernel: rcu: Hierarchical SRCU implementation.
Sep 3 23:22:57.783536 kernel: rcu: Max phase no-delay instances is 400.
Sep 3 23:22:57.783542 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 3 23:22:57.783549 kernel: Remapping and enabling EFI services.
Sep 3 23:22:57.783555 kernel: smp: Bringing up secondary CPUs ...
Sep 3 23:22:57.783562 kernel: Detected PIPT I-cache on CPU1
Sep 3 23:22:57.783568 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 3 23:22:57.783576 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 3 23:22:57.783587 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:22:57.783594 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 3 23:22:57.783602 kernel: Detected PIPT I-cache on CPU2
Sep 3 23:22:57.783609 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 3 23:22:57.783616 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 3 23:22:57.783623 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:22:57.783630 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 3 23:22:57.783644 kernel: Detected PIPT I-cache on CPU3
Sep 3 23:22:57.783653 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 3 23:22:57.783660 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 3 23:22:57.783667 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:22:57.783681 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 3 23:22:57.783688 kernel: smp: Brought up 1 node, 4 CPUs
Sep 3 23:22:57.783695 kernel: SMP: Total of 4 processors activated.
Sep 3 23:22:57.783702 kernel: CPU: All CPU(s) started at EL1
Sep 3 23:22:57.783708 kernel: CPU features: detected: 32-bit EL0 Support
Sep 3 23:22:57.783715 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 3 23:22:57.783724 kernel: CPU features: detected: Common not Private translations
Sep 3 23:22:57.783731 kernel: CPU features: detected: CRC32 instructions
Sep 3 23:22:57.783738 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 3 23:22:57.783745 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 3 23:22:57.783752 kernel: CPU features: detected: LSE atomic instructions
Sep 3 23:22:57.783759 kernel: CPU features: detected: Privileged Access Never
Sep 3 23:22:57.783766 kernel: CPU features: detected: RAS Extension Support
Sep 3 23:22:57.783773 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 3 23:22:57.783779 kernel: alternatives: applying system-wide alternatives
Sep 3 23:22:57.783788 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 3 23:22:57.783795 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved)
Sep 3 23:22:57.783802 kernel: devtmpfs: initialized
Sep 3 23:22:57.783809 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 3 23:22:57.783816 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 3 23:22:57.783823 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 3 23:22:57.783829 kernel: 0 pages in range for non-PLT usage
Sep 3 23:22:57.783836 kernel: 508560 pages in range for PLT usage
Sep 3 23:22:57.783843 kernel: pinctrl core: initialized pinctrl subsystem
Sep 3 23:22:57.783851 kernel: SMBIOS 3.0.0 present.
Sep 3 23:22:57.783858 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 3 23:22:57.783864 kernel: DMI: Memory slots populated: 1/1
Sep 3 23:22:57.783874 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 3 23:22:57.783881 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 3 23:22:57.783888 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 3 23:22:57.783895 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 3 23:22:57.783902 kernel: audit: initializing netlink subsys (disabled)
Sep 3 23:22:57.783909 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 3 23:22:57.783917 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 3 23:22:57.783924 kernel: cpuidle: using governor menu
Sep 3 23:22:57.783931 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 3 23:22:57.783938 kernel: ASID allocator initialised with 32768 entries
Sep 3 23:22:57.783944 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 3 23:22:57.783951 kernel: Serial: AMBA PL011 UART driver
Sep 3 23:22:57.783958 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 3 23:22:57.783965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 3 23:22:57.783972 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 3 23:22:57.783980 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 3 23:22:57.783987 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 3 23:22:57.783994 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 3 23:22:57.784001 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 3 23:22:57.784008 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 3 23:22:57.784015 kernel: ACPI: Added _OSI(Module Device)
Sep 3 23:22:57.784021 kernel: ACPI: Added _OSI(Processor Device)
Sep 3 23:22:57.784028 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 3 23:22:57.784035 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 3 23:22:57.784043 kernel: ACPI: Interpreter enabled
Sep 3 23:22:57.784050 kernel: ACPI: Using GIC for interrupt routing
Sep 3 23:22:57.784056 kernel: ACPI: MCFG table detected, 1 entries
Sep 3 23:22:57.784063 kernel: ACPI: CPU0 has been hot-added
Sep 3 23:22:57.784070 kernel: ACPI: CPU1 has been hot-added
Sep 3 23:22:57.784077 kernel: ACPI: CPU2 has been hot-added
Sep 3 23:22:57.784083 kernel: ACPI: CPU3 has been hot-added
Sep 3 23:22:57.784090 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 3 23:22:57.784097 kernel: printk: legacy console [ttyAMA0] enabled
Sep 3 23:22:57.784105 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 3 23:22:57.784241 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 3 23:22:57.784307 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 3 23:22:57.784365 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 3 23:22:57.784436 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 3 23:22:57.784494 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 3 23:22:57.784503 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 3 23:22:57.784513 kernel: PCI host bridge to bus 0000:00
Sep 3 23:22:57.784576 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 3 23:22:57.784629 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 3 23:22:57.784713 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 3 23:22:57.784767 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 3 23:22:57.784844 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 3 23:22:57.784915 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 3 23:22:57.784980 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 3 23:22:57.785040 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 3 23:22:57.785099 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 3 23:22:57.785158 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 3 23:22:57.785216 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 3 23:22:57.785275 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 3 23:22:57.785329 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 3 23:22:57.785382 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 3 23:22:57.785433 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 3 23:22:57.785442 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 3 23:22:57.785450 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 3 23:22:57.785457 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 3 23:22:57.785463 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 3 23:22:57.785470 kernel: iommu: Default domain type: Translated
Sep 3 23:22:57.785477 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 3 23:22:57.785485 kernel: efivars: Registered efivars operations
Sep 3 23:22:57.785492 kernel: vgaarb: loaded
Sep 3 23:22:57.785499 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 3 23:22:57.785506 kernel: VFS: Disk quotas dquot_6.6.0
Sep 3 23:22:57.785513 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 3 23:22:57.785520 kernel: pnp: PnP ACPI init
Sep 3 23:22:57.785586 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 3 23:22:57.785596 kernel: pnp: PnP ACPI: found 1 devices
Sep 3 23:22:57.785605 kernel: NET: Registered PF_INET protocol family
Sep 3 23:22:57.785612 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 3 23:22:57.785619 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 3 23:22:57.785626 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 3 23:22:57.785647 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 3 23:22:57.785655 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 3 23:22:57.785663 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 3 23:22:57.785669 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:22:57.785684 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:22:57.785694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 3 23:22:57.785701 kernel: PCI: CLS 0 bytes, default 64
Sep 3 23:22:57.785708 kernel: kvm [1]: HYP mode not available
Sep 3 23:22:57.785715 kernel: Initialise system trusted keyrings
Sep 3 23:22:57.785722 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 3 23:22:57.785729 kernel: Key type asymmetric registered
Sep 3 23:22:57.785735 kernel: Asymmetric key parser 'x509' registered
Sep 3 23:22:57.785742 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 3 23:22:57.785749 kernel: io scheduler mq-deadline registered
Sep 3 23:22:57.785758 kernel: io scheduler kyber registered
Sep 3 23:22:57.785764 kernel: io scheduler bfq registered
Sep 3 23:22:57.785771 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 3 23:22:57.785778 kernel: ACPI: button: Power Button [PWRB]
Sep 3 23:22:57.785785 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 3 23:22:57.785853 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 3 23:22:57.785863 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 3 23:22:57.785870 kernel: thunder_xcv, ver 1.0
Sep 3 23:22:57.785876 kernel: thunder_bgx, ver 1.0
Sep 3 23:22:57.785885 kernel: nicpf, ver 1.0
Sep 3 23:22:57.785892 kernel: nicvf, ver 1.0
Sep 3 23:22:57.785958 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 3 23:22:57.786013 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:22:57 UTC (1756941777)
Sep 3 23:22:57.786022 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 3 23:22:57.786030 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 3 23:22:57.786036 kernel: watchdog: NMI not fully supported
Sep 3 23:22:57.786043 kernel: watchdog: Hard watchdog permanently disabled
Sep 3 23:22:57.786052 kernel: NET: Registered PF_INET6 protocol family
Sep 3 23:22:57.786059 kernel: Segment Routing with IPv6
Sep 3 23:22:57.786066 kernel: In-situ OAM (IOAM) with IPv6
Sep 3 23:22:57.786073 kernel: NET: Registered PF_PACKET protocol family
Sep 3 23:22:57.786079 kernel: Key type dns_resolver registered
Sep 3 23:22:57.786087 kernel: registered taskstats version 1
Sep 3 23:22:57.786093 kernel: Loading compiled-in X.509 certificates
Sep 3 23:22:57.786101 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744'
Sep 3 23:22:57.786107 kernel: Demotion targets for Node 0: null
Sep 3 23:22:57.786116 kernel: Key type .fscrypt registered
Sep 3 23:22:57.786123 kernel: Key type fscrypt-provisioning registered
Sep 3 23:22:57.786130 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 3 23:22:57.786136 kernel: ima: Allocated hash algorithm: sha1
Sep 3 23:22:57.786143 kernel: ima: No architecture policies found
Sep 3 23:22:57.786150 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 3 23:22:57.786157 kernel: clk: Disabling unused clocks
Sep 3 23:22:57.786163 kernel: PM: genpd: Disabling unused power domains
Sep 3 23:22:57.786170 kernel: Warning: unable to open an initial console.
Sep 3 23:22:57.786179 kernel: Freeing unused kernel memory: 38976K
Sep 3 23:22:57.786186 kernel: Run /init as init process
Sep 3 23:22:57.786193 kernel: with arguments:
Sep 3 23:22:57.786200 kernel: /init
Sep 3 23:22:57.786207 kernel: with environment:
Sep 3 23:22:57.786213 kernel: HOME=/
Sep 3 23:22:57.786220 kernel: TERM=linux
Sep 3 23:22:57.786227 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 3 23:22:57.786235 systemd[1]: Successfully made /usr/ read-only.
Sep 3 23:22:57.786246 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:22:57.786254 systemd[1]: Detected virtualization kvm.
Sep 3 23:22:57.786262 systemd[1]: Detected architecture arm64.
Sep 3 23:22:57.786269 systemd[1]: Running in initrd.
Sep 3 23:22:57.786277 systemd[1]: No hostname configured, using default hostname.
Sep 3 23:22:57.786285 systemd[1]: Hostname set to .
Sep 3 23:22:57.786292 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:22:57.786301 systemd[1]: Queued start job for default target initrd.target.
Sep 3 23:22:57.786308 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:22:57.786316 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:22:57.786324 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 3 23:22:57.786331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:22:57.786339 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 3 23:22:57.786347 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 3 23:22:57.786357 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 3 23:22:57.786365 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 3 23:22:57.786372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:22:57.786380 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:22:57.786387 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:22:57.786394 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:22:57.786402 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:22:57.786409 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:22:57.786418 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:22:57.786425 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:22:57.786433 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 3 23:22:57.786440 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 3 23:22:57.786448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:22:57.786455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:22:57.786462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:22:57.786470 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:22:57.786477 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 3 23:22:57.786486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:22:57.786494 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 3 23:22:57.786501 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 3 23:22:57.786509 systemd[1]: Starting systemd-fsck-usr.service...
Sep 3 23:22:57.786516 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:22:57.786524 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:22:57.786532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:22:57.786539 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 3 23:22:57.786548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:22:57.786556 systemd[1]: Finished systemd-fsck-usr.service.
Sep 3 23:22:57.786564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:22:57.786585 systemd-journald[243]: Collecting audit messages is disabled.
Sep 3 23:22:57.786605 systemd-journald[243]: Journal started
Sep 3 23:22:57.786623 systemd-journald[243]: Runtime Journal (/run/log/journal/ee16f62c1be94eafae920c54d4c0440a) is 6M, max 48.5M, 42.4M free.
Sep 3 23:22:57.778700 systemd-modules-load[245]: Inserted module 'overlay'
Sep 3 23:22:57.789013 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:22:57.789950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:22:57.793662 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 3 23:22:57.793755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 3 23:22:57.795757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:22:57.799291 kernel: Bridge firewalling registered
Sep 3 23:22:57.795803 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 3 23:22:57.797525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:22:57.799505 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:22:57.807360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:22:57.808922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:22:57.816747 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 3 23:22:57.818570 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:22:57.819957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:22:57.822435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:22:57.823690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:22:57.826300 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 3 23:22:57.828593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:22:57.857659 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:22:57.871350 systemd-resolved[288]: Positive Trust Anchors:
Sep 3 23:22:57.871370 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:22:57.871402 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:22:57.876327 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 3 23:22:57.877350 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:22:57.879806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:22:57.932704 kernel: SCSI subsystem initialized
Sep 3 23:22:57.937651 kernel: Loading iSCSI transport class v2.0-870.
Sep 3 23:22:57.945666 kernel: iscsi: registered transport (tcp)
Sep 3 23:22:57.957881 kernel: iscsi: registered transport (qla4xxx)
Sep 3 23:22:57.957920 kernel: QLogic iSCSI HBA Driver
Sep 3 23:22:57.975435 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:22:57.994551 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:22:57.996864 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:22:58.044009 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:22:58.046180 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 3 23:22:58.107680 kernel: raid6: neonx8 gen() 15760 MB/s
Sep 3 23:22:58.124664 kernel: raid6: neonx4 gen() 15650 MB/s
Sep 3 23:22:58.141654 kernel: raid6: neonx2 gen() 13230 MB/s
Sep 3 23:22:58.158652 kernel: raid6: neonx1 gen() 10530 MB/s
Sep 3 23:22:58.175651 kernel: raid6: int64x8 gen() 6883 MB/s
Sep 3 23:22:58.192651 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 3 23:22:58.209652 kernel: raid6: int64x2 gen() 6098 MB/s
Sep 3 23:22:58.226657 kernel: raid6: int64x1 gen() 5027 MB/s
Sep 3 23:22:58.226677 kernel: raid6: using algorithm neonx8 gen() 15760 MB/s
Sep 3 23:22:58.243660 kernel: raid6: .... xor() 11955 MB/s, rmw enabled
Sep 3 23:22:58.243682 kernel: raid6: using neon recovery algorithm
Sep 3 23:22:58.249057 kernel: xor: measuring software checksum speed
Sep 3 23:22:58.249072 kernel: 8regs : 21641 MB/sec
Sep 3 23:22:58.249649 kernel: 32regs : 21699 MB/sec
Sep 3 23:22:58.250684 kernel: arm64_neon : 24411 MB/sec
Sep 3 23:22:58.250696 kernel: xor: using function: arm64_neon (24411 MB/sec)
Sep 3 23:22:58.302663 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 3 23:22:58.309372 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:22:58.311690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:22:58.340529 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 3 23:22:58.344597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:22:58.346314 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 3 23:22:58.377204 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Sep 3 23:22:58.400923 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:22:58.403095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:22:58.461548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:22:58.464139 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 3 23:22:58.529105 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 3 23:22:58.529664 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 3 23:22:58.538658 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 3 23:22:58.538715 kernel: GPT:9289727 != 19775487
Sep 3 23:22:58.538726 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 3 23:22:58.538735 kernel: GPT:9289727 != 19775487
Sep 3 23:22:58.538743 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 3 23:22:58.538751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:22:58.541582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:22:58.541716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:22:58.545226 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:22:58.546986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:22:58.572757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:22:58.585803 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 3 23:22:58.587774 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:22:58.595059 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 3 23:22:58.603138 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:22:58.609550 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 3 23:22:58.610717 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 3 23:22:58.613139 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:22:58.614994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:22:58.616779 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:22:58.619197 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 3 23:22:58.620797 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 3 23:22:58.636811 disk-uuid[591]: Primary Header is updated.
Sep 3 23:22:58.636811 disk-uuid[591]: Secondary Entries is updated.
Sep 3 23:22:58.636811 disk-uuid[591]: Secondary Header is updated.
Sep 3 23:22:58.641663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:22:58.642567 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:22:58.647652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:22:59.649661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:22:59.650117 disk-uuid[595]: The operation has completed successfully.
Sep 3 23:22:59.676862 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 3 23:22:59.676959 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 3 23:22:59.703979 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 3 23:22:59.727603 sh[613]: Success
Sep 3 23:22:59.740202 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 3 23:22:59.740244 kernel: device-mapper: uevent: version 1.0.3
Sep 3 23:22:59.740264 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 3 23:22:59.747691 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 3 23:22:59.770785 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 3 23:22:59.773183 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 3 23:22:59.787899 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 3 23:22:59.795504 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (625)
Sep 3 23:22:59.795546 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949
Sep 3 23:22:59.795557 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:22:59.800660 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 3 23:22:59.800706 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 3 23:22:59.802112 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 3 23:22:59.804189 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:22:59.806099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 3 23:22:59.807070 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 3 23:22:59.809948 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 3 23:22:59.832155 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (658)
Sep 3 23:22:59.832204 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:22:59.832216 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:22:59.834926 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:22:59.834956 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:22:59.838672 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:22:59.839497 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 3 23:22:59.841707 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 3 23:22:59.901152 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:22:59.904352 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:22:59.938824 systemd-networkd[800]: lo: Link UP
Sep 3 23:22:59.938835 systemd-networkd[800]: lo: Gained carrier
Sep 3 23:22:59.939498 systemd-networkd[800]: Enumeration completed
Sep 3 23:22:59.939602 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:22:59.940215 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:22:59.940218 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:22:59.940957 systemd-networkd[800]: eth0: Link UP
Sep 3 23:22:59.941096 systemd-networkd[800]: eth0: Gained carrier
Sep 3 23:22:59.941104 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:22:59.941538 systemd[1]: Reached target network.target - Network.
Sep 3 23:22:59.953590 ignition[701]: Ignition 2.21.0
Sep 3 23:22:59.953606 ignition[701]: Stage: fetch-offline
Sep 3 23:22:59.953644 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:22:59.953652 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:22:59.953818 ignition[701]: parsed url from cmdline: ""
Sep 3 23:22:59.953821 ignition[701]: no config URL provided
Sep 3 23:22:59.953826 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:22:59.953833 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:22:59.953853 ignition[701]: op(1): [started] loading QEMU firmware config module
Sep 3 23:22:59.960695 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:22:59.953857 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 3 23:22:59.959273 ignition[701]: op(1): [finished] loading QEMU firmware config module
Sep 3 23:23:00.000748 ignition[701]: parsing config with SHA512: a76828000d39837f48640b34706676546f06bb77ccbbb9ad9e7738337c7c65759c87ad297a52e56003872d4edf8c2333b33e8ac3c3c1e50278b76f4820cdd3e2
Sep 3 23:23:00.005483 unknown[701]: fetched base config from "system"
Sep 3 23:23:00.005496 unknown[701]: fetched user config from "qemu"
Sep 3 23:23:00.005869 ignition[701]: fetch-offline: fetch-offline passed
Sep 3 23:23:00.005924 ignition[701]: Ignition finished successfully
Sep 3 23:23:00.008046 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:23:00.009468 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 3 23:23:00.010219 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 3 23:23:00.041772 ignition[813]: Ignition 2.21.0
Sep 3 23:23:00.041791 ignition[813]: Stage: kargs
Sep 3 23:23:00.041963 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:00.042566 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:23:00.043834 ignition[813]: kargs: kargs passed
Sep 3 23:23:00.043885 ignition[813]: Ignition finished successfully
Sep 3 23:23:00.048117 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 3 23:23:00.049836 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 3 23:23:00.073319 ignition[821]: Ignition 2.21.0
Sep 3 23:23:00.073335 ignition[821]: Stage: disks
Sep 3 23:23:00.073463 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:00.073472 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:23:00.075853 ignition[821]: disks: disks passed
Sep 3 23:23:00.075918 ignition[821]: Ignition finished successfully
Sep 3 23:23:00.078201 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 3 23:23:00.079196 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 3 23:23:00.080552 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 3 23:23:00.082281 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:23:00.083795 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:23:00.085140 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:23:00.087286 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 3 23:23:00.119140 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 3 23:23:00.124005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 3 23:23:00.127336 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 3 23:23:00.185655 kernel: EXT4-fs (vda9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none.
Sep 3 23:23:00.186405 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 3 23:23:00.187507 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:23:00.189445 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:23:00.190876 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 3 23:23:00.191631 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 3 23:23:00.191707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 3 23:23:00.191730 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:23:00.209172 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 3 23:23:00.212118 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 3 23:23:00.216044 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 3 23:23:00.216065 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:00.216075 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:23:00.218654 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:23:00.218682 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:23:00.220181 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:23:00.246237 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 3 23:23:00.250302 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 3 23:23:00.253678 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 3 23:23:00.256916 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 3 23:23:00.320700 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 3 23:23:00.322411 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 3 23:23:00.324481 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 3 23:23:00.337653 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:00.346961 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 3 23:23:00.354929 ignition[954]: INFO : Ignition 2.21.0
Sep 3 23:23:00.354929 ignition[954]: INFO : Stage: mount
Sep 3 23:23:00.356172 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:00.356172 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:23:00.357728 ignition[954]: INFO : mount: mount passed
Sep 3 23:23:00.357728 ignition[954]: INFO : Ignition finished successfully
Sep 3 23:23:00.358309 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 3 23:23:00.360405 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 3 23:23:00.794429 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 3 23:23:00.795914 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:23:00.812656 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 3 23:23:00.814489 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:00.814504 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:23:00.816852 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:23:00.816872 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:23:00.818424 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:23:00.850045 ignition[984]: INFO : Ignition 2.21.0
Sep 3 23:23:00.850045 ignition[984]: INFO : Stage: files
Sep 3 23:23:00.852223 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:00.852223 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:23:00.852223 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Sep 3 23:23:00.855411 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 3 23:23:00.855411 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 3 23:23:00.855411 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 3 23:23:00.855411 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 3 23:23:00.855411 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 3 23:23:00.854855 unknown[984]: wrote ssh authorized keys file for user: core
Sep 3 23:23:00.861800 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 3 23:23:00.861800 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 3 23:23:00.906714 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 3 23:23:01.177232 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 3 23:23:01.177232 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:23:01.180164 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 3 23:23:01.354875 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 3 23:23:01.439673 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:23:01.441113 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:23:01.452032 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:23:01.452032 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:23:01.452032 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:23:01.452032 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:23:01.452032 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:23:01.452032 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 3 23:23:01.885156 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 3 23:23:01.961761 systemd-networkd[800]: eth0: Gained IPv6LL
Sep 3 23:23:02.355934 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:23:02.355934 ignition[984]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 3 23:23:02.359022 ignition[984]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:23:02.372787 ignition[984]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:23:02.377741 ignition[984]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:23:02.378838 ignition[984]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:23:02.378838 ignition[984]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 3 23:23:02.378838 ignition[984]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 3 23:23:02.378838 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:23:02.378838 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:23:02.378838 ignition[984]: INFO : files: files passed
Sep 3 23:23:02.378838 ignition[984]: INFO : Ignition finished successfully
Sep 3 23:23:02.381549 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 3 23:23:02.384539 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 3 23:23:02.387341 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 3 23:23:02.400541 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 3 23:23:02.400649 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 3 23:23:02.403988 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 3 23:23:02.406823 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:23:02.406823 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:23:02.409208 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:23:02.409523 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:23:02.411740 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 3 23:23:02.414272 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 3 23:23:02.446007 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 3 23:23:02.446113 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 3 23:23:02.448039 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 3 23:23:02.449506 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 3 23:23:02.450997 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 3 23:23:02.451786 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 3 23:23:02.483820 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:23:02.486070 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 3 23:23:02.508720 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:23:02.509736 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:23:02.511429 systemd[1]: Stopped target timers.target - Timer Units.
Sep 3 23:23:02.512830 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 3 23:23:02.512954 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:23:02.514923 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 3 23:23:02.516490 systemd[1]: Stopped target basic.target - Basic System.
Sep 3 23:23:02.517781 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 3 23:23:02.519109 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:23:02.520569 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 3 23:23:02.522202 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:23:02.523654 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 3 23:23:02.525156 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:23:02.526619 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 3 23:23:02.528285 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 3 23:23:02.529640 systemd[1]: Stopped target swap.target - Swaps.
Sep 3 23:23:02.530894 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 3 23:23:02.531026 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:23:02.532972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:23:02.534511 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:23:02.536033 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 3 23:23:02.536142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:23:02.537664 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 3 23:23:02.537782 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:23:02.540090 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 3 23:23:02.540204 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:23:02.541603 systemd[1]: Stopped target paths.target - Path Units.
Sep 3 23:23:02.542855 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 3 23:23:02.543762 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:23:02.545382 systemd[1]: Stopped target slices.target - Slice Units.
Sep 3 23:23:02.546486 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 3 23:23:02.547854 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 3 23:23:02.547942 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:23:02.549472 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 3 23:23:02.549546 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:23:02.550777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 3 23:23:02.550893 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:23:02.552356 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 3 23:23:02.552454 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 3 23:23:02.554379 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 3 23:23:02.555322 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 3 23:23:02.555466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:23:02.557710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 3 23:23:02.559259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 3 23:23:02.559385 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:23:02.560772 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 3 23:23:02.560869 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:23:02.566987 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 3 23:23:02.567075 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 3 23:23:02.571922 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 3 23:23:02.578568 ignition[1041]: INFO : Ignition 2.21.0
Sep 3 23:23:02.578568 ignition[1041]: INFO : Stage: umount
Sep 3 23:23:02.580719 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:02.580719 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:23:02.580719 ignition[1041]: INFO : umount: umount passed
Sep 3 23:23:02.580719 ignition[1041]: INFO : Ignition finished successfully
Sep 3 23:23:02.581598 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 3 23:23:02.583104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 3 23:23:02.584135 systemd[1]: Stopped target network.target - Network.
Sep 3 23:23:02.585347 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 3 23:23:02.585404 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 3 23:23:02.586767 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 3 23:23:02.586810 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 3 23:23:02.588128 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 3 23:23:02.588173 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 3 23:23:02.589415 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 3 23:23:02.589454 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 3 23:23:02.590993 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 3 23:23:02.592264 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 3 23:23:02.600532 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 3 23:23:02.600655 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 3 23:23:02.604550 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 3 23:23:02.605841 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 3 23:23:02.605949 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 3 23:23:02.609056 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 3 23:23:02.609623 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 3 23:23:02.610525 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 3 23:23:02.610565 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:23:02.613275 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 3 23:23:02.614081 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 3 23:23:02.614141 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:23:02.615306 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:23:02.615351 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:23:02.618003 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 3 23:23:02.618046 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:23:02.619647 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 3 23:23:02.619698 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:23:02.622269 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:23:02.626474 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:23:02.626531 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:23:02.629947 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 3 23:23:02.630052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 3 23:23:02.631816 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 3 23:23:02.631899 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 3 23:23:02.636066 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 3 23:23:02.636165 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 3 23:23:02.641233 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 3 23:23:02.641366 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:23:02.643068 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 3 23:23:02.643105 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:23:02.644416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 3 23:23:02.644444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:23:02.645805 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 3 23:23:02.645844 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:23:02.647949 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 3 23:23:02.647989 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:23:02.650153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 3 23:23:02.650202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:23:02.653428 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 3 23:23:02.654950 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 3 23:23:02.655002 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:23:02.657555 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 3 23:23:02.657592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:23:02.660359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:23:02.660399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:23:02.663866 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 3 23:23:02.663949 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 3 23:23:02.663980 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:23:02.672784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 3 23:23:02.672896 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 3 23:23:02.674814 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 3 23:23:02.676951 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 3 23:23:02.704981 systemd[1]: Switching root.
Sep 3 23:23:02.740582 systemd-journald[243]: Journal stopped
Sep 3 23:23:03.471778 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Sep 3 23:23:03.471834 kernel: SELinux: policy capability network_peer_controls=1
Sep 3 23:23:03.471850 kernel: SELinux: policy capability open_perms=1
Sep 3 23:23:03.471863 kernel: SELinux: policy capability extended_socket_class=1
Sep 3 23:23:03.471875 kernel: SELinux: policy capability always_check_network=0
Sep 3 23:23:03.471886 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 3 23:23:03.471895 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 3 23:23:03.471904 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 3 23:23:03.471913 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 3 23:23:03.471924 kernel: SELinux: policy capability userspace_initial_context=0
Sep 3 23:23:03.471933 kernel: audit: type=1403 audit(1756941782.922:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 3 23:23:03.471947 systemd[1]: Successfully loaded SELinux policy in 45.017ms.
Sep 3 23:23:03.471964 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.483ms.
Sep 3 23:23:03.471975 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:23:03.471986 systemd[1]: Detected virtualization kvm.
Sep 3 23:23:03.471995 systemd[1]: Detected architecture arm64.
Sep 3 23:23:03.472005 systemd[1]: Detected first boot.
Sep 3 23:23:03.472016 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:23:03.472027 zram_generator::config[1086]: No configuration found.
Sep 3 23:23:03.472039 kernel: NET: Registered PF_VSOCK protocol family
Sep 3 23:23:03.472049 systemd[1]: Populated /etc with preset unit settings.
Sep 3 23:23:03.472059 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 3 23:23:03.472069 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 3 23:23:03.472078 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 3 23:23:03.472088 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:23:03.472098 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 3 23:23:03.472109 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 3 23:23:03.472122 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 3 23:23:03.472132 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 3 23:23:03.472142 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 3 23:23:03.472152 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 3 23:23:03.472161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 3 23:23:03.472171 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 3 23:23:03.472181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:23:03.472191 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:23:03.472203 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:23:03.472213 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 3 23:23:03.472223 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 3 23:23:03.472233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:23:03.472243 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 3 23:23:03.472253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:23:03.472263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:23:03.472273 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 3 23:23:03.472284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 3 23:23:03.472294 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:23:03.472305 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 3 23:23:03.472314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:23:03.472324 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:23:03.472335 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:23:03.472345 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:23:03.472355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 3 23:23:03.472365 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 3 23:23:03.472376 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 3 23:23:03.472386 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:23:03.472395 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:23:03.472406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:23:03.472415 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 3 23:23:03.472425 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 3 23:23:03.472435 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 3 23:23:03.472445 systemd[1]: Mounting media.mount - External Media Directory...
Sep 3 23:23:03.472455 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 3 23:23:03.472466 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 3 23:23:03.472476 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 3 23:23:03.472486 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 3 23:23:03.472497 systemd[1]: Reached target machines.target - Containers.
Sep 3 23:23:03.472507 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 3 23:23:03.472517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:23:03.472527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:23:03.472537 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 3 23:23:03.472554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:23:03.472565 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:23:03.472575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:23:03.472585 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 3 23:23:03.472594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:23:03.472604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 3 23:23:03.472614 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 3 23:23:03.472624 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 3 23:23:03.473546 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 3 23:23:03.473580 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 3 23:23:03.473592 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:23:03.473603 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:23:03.473612 kernel: loop: module loaded
Sep 3 23:23:03.473622 kernel: fuse: init (API version 7.41)
Sep 3 23:23:03.473632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:23:03.473664 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:23:03.473678 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 3 23:23:03.473689 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 3 23:23:03.473701 kernel: ACPI: bus type drm_connector registered
Sep 3 23:23:03.473713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:23:03.473723 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 3 23:23:03.473732 systemd[1]: Stopped verity-setup.service.
Sep 3 23:23:03.473744 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 3 23:23:03.473754 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 3 23:23:03.473855 systemd-journald[1151]: Collecting audit messages is disabled.
Sep 3 23:23:03.473881 systemd[1]: Mounted media.mount - External Media Directory.
Sep 3 23:23:03.473892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 3 23:23:03.473903 systemd-journald[1151]: Journal started
Sep 3 23:23:03.473926 systemd-journald[1151]: Runtime Journal (/run/log/journal/ee16f62c1be94eafae920c54d4c0440a) is 6M, max 48.5M, 42.4M free.
Sep 3 23:23:03.277081 systemd[1]: Queued start job for default target multi-user.target.
Sep 3 23:23:03.301557 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 3 23:23:03.301958 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 3 23:23:03.476648 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:23:03.476871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 3 23:23:03.478284 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 3 23:23:03.479425 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 3 23:23:03.481691 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:23:03.482861 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 3 23:23:03.483030 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 3 23:23:03.485067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:23:03.485241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:23:03.486363 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:23:03.486529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:23:03.488984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:23:03.489147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:23:03.490400 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 3 23:23:03.490574 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 3 23:23:03.491770 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:23:03.491933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:23:03.493136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:23:03.494292 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:23:03.495559 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 3 23:23:03.496891 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 3 23:23:03.508572 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:23:03.510924 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 3 23:23:03.512668 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 3 23:23:03.513559 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 3 23:23:03.513590 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:23:03.515414 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 3 23:23:03.525478 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 3 23:23:03.526556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:23:03.527938 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 3 23:23:03.529706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 3 23:23:03.530739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:23:03.532877 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 3 23:23:03.533989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:23:03.536745 systemd-journald[1151]: Time spent on flushing to /var/log/journal/ee16f62c1be94eafae920c54d4c0440a is 21.618ms for 887 entries.
Sep 3 23:23:03.536745 systemd-journald[1151]: System Journal (/var/log/journal/ee16f62c1be94eafae920c54d4c0440a) is 8M, max 195.6M, 187.6M free.
Sep 3 23:23:03.565269 systemd-journald[1151]: Received client request to flush runtime journal.
Sep 3 23:23:03.565315 kernel: loop0: detected capacity change from 0 to 203944
Sep 3 23:23:03.536565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:23:03.539947 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 3 23:23:03.545724 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 3 23:23:03.548004 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:23:03.549362 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 3 23:23:03.550435 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 3 23:23:03.554914 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 3 23:23:03.557818 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 3 23:23:03.560321 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 3 23:23:03.571693 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 3 23:23:03.572406 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 3 23:23:03.582565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:23:03.589049 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 3 23:23:03.594934 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 3 23:23:03.597738 kernel: loop1: detected capacity change from 0 to 138376
Sep 3 23:23:03.599043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:23:03.621750 kernel: loop2: detected capacity change from 0 to 107312
Sep 3 23:23:03.626272 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Sep 3 23:23:03.626551 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Sep 3 23:23:03.633689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:23:03.649870 kernel: loop3: detected capacity change from 0 to 203944
Sep 3 23:23:03.655688 kernel: loop4: detected capacity change from 0 to 138376
Sep 3 23:23:03.661679 kernel: loop5: detected capacity change from 0 to 107312
Sep 3 23:23:03.665420 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 3 23:23:03.665803 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 3 23:23:03.669382 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 3 23:23:03.669403 systemd[1]: Reloading...
Sep 3 23:23:03.726669 zram_generator::config[1249]: No configuration found.
Sep 3 23:23:03.784353 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 3 23:23:03.803902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:23:03.867029 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 3 23:23:03.867446 systemd[1]: Reloading finished in 197 ms.
Sep 3 23:23:03.893233 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 3 23:23:03.894535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 3 23:23:03.912845 systemd[1]: Starting ensure-sysext.service...
Sep 3 23:23:03.916373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:23:03.932409 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 3 23:23:03.932443 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 3 23:23:03.932772 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 3 23:23:03.932965 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 3 23:23:03.933551 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 3 23:23:03.933792 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 3 23:23:03.933833 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 3 23:23:03.936227 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:23:03.936242 systemd-tmpfiles[1287]: Skipping /boot
Sep 3 23:23:03.942609 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Sep 3 23:23:03.942752 systemd[1]: Reloading...
Sep 3 23:23:03.944917 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:23:03.944935 systemd-tmpfiles[1287]: Skipping /boot
Sep 3 23:23:04.021713 zram_generator::config[1323]: No configuration found.
Sep 3 23:23:04.081431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:23:04.145058 systemd[1]: Reloading finished in 200 ms.
Sep 3 23:23:04.170234 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 3 23:23:04.175788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:23:04.185730 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:23:04.188220 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 3 23:23:04.190317 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 3 23:23:04.193807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:23:04.196062 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:23:04.198141 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 3 23:23:04.208155 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 3 23:23:04.212026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:23:04.213496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:23:04.215547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:23:04.217765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:23:04.218755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:23:04.218901 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:23:04.219828 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 3 23:23:04.226157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:23:04.228580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:23:04.230948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:23:04.231162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:23:04.232784 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:23:04.232972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:23:04.238355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:23:04.239842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:23:04.243072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:23:04.251610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:23:04.252574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:23:04.252712 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:23:04.253953 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 3 23:23:04.256066 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 3 23:23:04.257965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:23:04.258189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:23:04.262168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:23:04.262631 augenrules[1387]: No rules
Sep 3 23:23:04.263032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:23:04.266747 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
Sep 3 23:23:04.267274 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 3 23:23:04.268925 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:23:04.269128 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:23:04.270374 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 3 23:23:04.271861 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:23:04.279987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:23:04.281497 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 3 23:23:04.294200 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:23:04.296900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:23:04.300850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:23:04.303933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:23:04.311073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:23:04.318472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:23:04.319448 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:23:04.319565 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:23:04.320750 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:23:04.324311 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:23:04.328046 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:23:04.328237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:23:04.334091 systemd[1]: Finished ensure-sysext.service.
Sep 3 23:23:04.345011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:23:04.345802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:23:04.351071 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:23:04.355171 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 3 23:23:04.357070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:23:04.357251 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:23:04.359861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:23:04.365178 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:23:04.365359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:23:04.367436 systemd-resolved[1353]: Positive Trust Anchors:
Sep 3 23:23:04.367451 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:23:04.367483 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:23:04.368369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:23:04.369380 augenrules[1406]: /sbin/augenrules: No change
Sep 3 23:23:04.374984 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Sep 3 23:23:04.378769 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:23:04.379859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:23:04.389927 augenrules[1460]: No rules
Sep 3 23:23:04.392099 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:23:04.392338 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:23:04.398540 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 3 23:23:04.426345 systemd-networkd[1445]: lo: Link UP
Sep 3 23:23:04.426354 systemd-networkd[1445]: lo: Gained carrier
Sep 3 23:23:04.427081 systemd-networkd[1445]: Enumeration completed
Sep 3 23:23:04.427806 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:23:04.428733 systemd[1]: Reached target network.target - Network.
Sep 3 23:23:04.434311 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 3 23:23:04.442252 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 3 23:23:04.446272 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:23:04.446285 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:23:04.446846 systemd-networkd[1445]: eth0: Link UP
Sep 3 23:23:04.446951 systemd-networkd[1445]: eth0: Gained carrier
Sep 3 23:23:04.446969 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:23:04.454843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:23:04.458802 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 3 23:23:04.465700 systemd-networkd[1445]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:23:04.469789 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 3 23:23:04.470989 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 3 23:23:04.471336 systemd-timesyncd[1448]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 3 23:23:04.471675 systemd-timesyncd[1448]: Initial clock synchronization to Wed 2025-09-03 23:23:04.445982 UTC.
Sep 3 23:23:04.472378 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:23:04.473365 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 3 23:23:04.474354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 3 23:23:04.475351 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 3 23:23:04.476328 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 3 23:23:04.476359 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:23:04.477112 systemd[1]: Reached target time-set.target - System Time Set.
Sep 3 23:23:04.477997 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 3 23:23:04.478890 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 3 23:23:04.479817 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:23:04.481732 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 3 23:23:04.483956 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 3 23:23:04.486861 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 3 23:23:04.488101 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 3 23:23:04.489209 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 3 23:23:04.492493 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 3 23:23:04.494015 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 3 23:23:04.496569 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 3 23:23:04.498600 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 3 23:23:04.499957 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:23:04.500962 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:23:04.502101 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:23:04.502139 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:23:04.504564 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 3 23:23:04.507876 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 3 23:23:04.521799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 3 23:23:04.524224 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 3 23:23:04.527875 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 3 23:23:04.529435 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 3 23:23:04.532721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 3 23:23:04.534971 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 3 23:23:04.536834 jq[1498]: false
Sep 3 23:23:04.537482 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 3 23:23:04.539860 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 3 23:23:04.545155 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 3 23:23:04.546841 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 3 23:23:04.547276 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 3 23:23:04.547917 systemd[1]: Starting update-engine.service - Update Engine...
Sep 3 23:23:04.551818 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 3 23:23:04.554362 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 3 23:23:04.555823 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 3 23:23:04.557686 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 3 23:23:04.561933 extend-filesystems[1500]: Found /dev/vda6
Sep 3 23:23:04.562387 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 3 23:23:04.562988 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 3 23:23:04.566898 systemd[1]: motdgen.service: Deactivated successfully.
Sep 3 23:23:04.567104 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 3 23:23:04.567409 extend-filesystems[1500]: Found /dev/vda9
Sep 3 23:23:04.571866 extend-filesystems[1500]: Checking size of /dev/vda9
Sep 3 23:23:04.575908 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 3 23:23:04.580387 jq[1513]: true
Sep 3 23:23:04.586099 update_engine[1508]: I20250903 23:23:04.585412 1508 main.cc:92] Flatcar Update Engine starting
Sep 3 23:23:04.592576 extend-filesystems[1500]: Resized partition /dev/vda9
Sep 3 23:23:04.594577 extend-filesystems[1538]: resize2fs 1.47.2 (1-Jan-2025)
Sep 3 23:23:04.598024 tar[1520]: linux-arm64/helm
Sep 3 23:23:04.599822 jq[1534]: true
Sep 3 23:23:04.601171 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 3 23:23:04.605821 dbus-daemon[1493]: [system] SELinux support is enabled
Sep 3 23:23:04.606211 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 3 23:23:04.614123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 3 23:23:04.614287 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 3 23:23:04.616970 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 3 23:23:04.617100 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 3 23:23:04.620704 systemd[1]: Started update-engine.service - Update Engine.
Sep 3 23:23:04.621088 update_engine[1508]: I20250903 23:23:04.620831 1508 update_check_scheduler.cc:74] Next update check in 8m53s
Sep 3 23:23:04.624673 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 3 23:23:04.627435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:23:04.632347 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 3 23:23:04.640321 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 3 23:23:04.640321 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 3 23:23:04.640321 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 3 23:23:04.646711 extend-filesystems[1500]: Resized filesystem in /dev/vda9
Sep 3 23:23:04.646569 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 3 23:23:04.647882 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 3 23:23:04.682534 bash[1560]: Updated "/home/core/.ssh/authorized_keys"
Sep 3 23:23:04.687723 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 3 23:23:04.689184 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 3 23:23:04.753585 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 3 23:23:04.754712 systemd-logind[1505]: New seat seat0.
Sep 3 23:23:04.757284 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 3 23:23:04.761585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:23:04.780936 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 3 23:23:04.782608 containerd[1526]: time="2025-09-03T23:23:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 3 23:23:04.783668 containerd[1526]: time="2025-09-03T23:23:04.783449440Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 3 23:23:04.792009 containerd[1526]: time="2025-09-03T23:23:04.791973800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.52µs"
Sep 3 23:23:04.792009 containerd[1526]: time="2025-09-03T23:23:04.792006120Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 3 23:23:04.792099 containerd[1526]: time="2025-09-03T23:23:04.792023320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 3 23:23:04.792184 containerd[1526]: time="2025-09-03T23:23:04.792165320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 3 23:23:04.792212 containerd[1526]: time="2025-09-03T23:23:04.792186040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 3 23:23:04.792212 containerd[1526]: time="2025-09-03T23:23:04.792209200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792270 containerd[1526]: time="2025-09-03T23:23:04.792255040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792291 containerd[1526]: time="2025-09-03T23:23:04.792269440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792489 containerd[1526]: time="2025-09-03T23:23:04.792470800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792509 containerd[1526]: time="2025-09-03T23:23:04.792489920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792509 containerd[1526]: time="2025-09-03T23:23:04.792500840Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792545 containerd[1526]: time="2025-09-03T23:23:04.792508240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792592 containerd[1526]: time="2025-09-03T23:23:04.792578640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792832 containerd[1526]: time="2025-09-03T23:23:04.792810480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792860 containerd[1526]: time="2025-09-03T23:23:04.792847920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:23:04.792860 containerd[1526]: time="2025-09-03T23:23:04.792858320Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 3 23:23:04.792899 containerd[1526]: time="2025-09-03T23:23:04.792889200Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 3 23:23:04.793085 containerd[1526]: time="2025-09-03T23:23:04.793068960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 3 23:23:04.793145 containerd[1526]: time="2025-09-03T23:23:04.793131040Z" level=info msg="metadata content store policy set" policy=shared
Sep 3 23:23:04.796250 containerd[1526]: time="2025-09-03T23:23:04.796212760Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 3 23:23:04.796293 containerd[1526]: time="2025-09-03T23:23:04.796270880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 3 23:23:04.796293 containerd[1526]: time="2025-09-03T23:23:04.796286080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 3 23:23:04.796366 containerd[1526]: time="2025-09-03T23:23:04.796297280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 3 23:23:04.796366 containerd[1526]: time="2025-09-03T23:23:04.796310320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 3 23:23:04.796366 containerd[1526]: time="2025-09-03T23:23:04.796349680Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 3 23:23:04.796366 containerd[1526]: time="2025-09-03T23:23:04.796362560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 3 23:23:04.796435 containerd[1526]: time="2025-09-03T23:23:04.796375240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 3 23:23:04.796435 containerd[1526]: time="2025-09-03T23:23:04.796394800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 3 23:23:04.796435 containerd[1526]: time="2025-09-03T23:23:04.796408280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 3 23:23:04.796435 containerd[1526]: time="2025-09-03T23:23:04.796417560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 3 23:23:04.796435 containerd[1526]: time="2025-09-03T23:23:04.796430200Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 3 23:23:04.796555 containerd[1526]: time="2025-09-03T23:23:04.796535280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 3 23:23:04.796581 containerd[1526]: time="2025-09-03T23:23:04.796561440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 3 23:23:04.796599 containerd[1526]: time="2025-09-03T23:23:04.796582080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 3 23:23:04.796599 containerd[1526]: time="2025-09-03T23:23:04.796593600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 3 23:23:04.796644 containerd[1526]: time="2025-09-03T23:23:04.796603480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 3 23:23:04.796644 containerd[1526]: time="2025-09-03T23:23:04.796614400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 3 23:23:04.796644 containerd[1526]: time="2025-09-03T23:23:04.796624960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 3 23:23:04.796713 containerd[1526]: time="2025-09-03T23:23:04.796648520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 3 23:23:04.796713 containerd[1526]: time="2025-09-03T23:23:04.796670400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 3 23:23:04.796713 containerd[1526]: time="2025-09-03T23:23:04.796682920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 3 23:23:04.796713 containerd[1526]: time="2025-09-03T23:23:04.796693280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 3 23:23:04.796911 containerd[1526]: time="2025-09-03T23:23:04.796880120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 3 23:23:04.796911 containerd[1526]: time="2025-09-03T23:23:04.796902200Z" level=info msg="Start snapshots syncer"
Sep 3 23:23:04.797107 containerd[1526]: time="2025-09-03T23:23:04.796927040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 3 23:23:04.797181 containerd[1526]: time="2025-09-03T23:23:04.797143920Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 3 23:23:04.797290 containerd[1526]: time="2025-09-03T23:23:04.797192720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 3 23:23:04.797290 containerd[1526]: time="2025-09-03T23:23:04.797268600Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 3 23:23:04.797417 containerd[1526]: time="2025-09-03T23:23:04.797394080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 3 23:23:04.797445 containerd[1526]: time="2025-09-03T23:23:04.797423480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 3 23:23:04.797445 containerd[1526]: time="2025-09-03T23:23:04.797442440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 3 23:23:04.797484 containerd[1526]: time="2025-09-03T23:23:04.797456280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 3 23:23:04.797484 containerd[1526]: time="2025-09-03T23:23:04.797467920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 3 23:23:04.797484 containerd[1526]: time="2025-09-03T23:23:04.797478160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 3 23:23:04.797535 containerd[1526]: time="2025-09-03T23:23:04.797487880Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 3 23:23:04.797535 containerd[1526]: time="2025-09-03T23:23:04.797512720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 3 23:23:04.797535 containerd[1526]: time="2025-09-03T23:23:04.797526800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 3 23:23:04.797581 containerd[1526]: time="2025-09-03T23:23:04.797536520Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 3 23:23:04.797601 containerd[1526]: time="2025-09-03T23:23:04.797583120Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:23:04.797601 containerd[1526]: time="2025-09-03T23:23:04.797596640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:23:04.797672 containerd[1526]: time="2025-09-03T23:23:04.797604960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:23:04.797672 containerd[1526]: time="2025-09-03T23:23:04.797614440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:23:04.797672 containerd[1526]: time="2025-09-03T23:23:04.797622200Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 3 23:23:04.797726 containerd[1526]: time="2025-09-03T23:23:04.797702720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 3 23:23:04.797726 containerd[1526]: time="2025-09-03T23:23:04.797716840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 3 23:23:04.797805 containerd[1526]: time="2025-09-03T23:23:04.797791440Z" level=info msg="runtime interface created"
Sep 3 23:23:04.797805 containerd[1526]: time="2025-09-03T23:23:04.797801160Z" level=info msg="created NRI interface"
Sep 3 23:23:04.797842 containerd[1526]: time="2025-09-03T23:23:04.797809920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 3 23:23:04.797842 containerd[1526]: time="2025-09-03T23:23:04.797821080Z" level=info msg="Connect containerd service"
Sep 3 23:23:04.797876 containerd[1526]: time="2025-09-03T23:23:04.797846280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 3 23:23:04.798786 containerd[1526]: time="2025-09-03T23:23:04.798756600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:23:04.861166 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 3 23:23:04.880706 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.880923200Z" level=info msg="Start subscribing containerd event"
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.880987720Z" level=info msg="Start recovering state"
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.881080280Z" level=info msg="Start event monitor"
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.881094040Z" level=info msg="Start cni network conf syncer for default"
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.881103360Z" level=info msg="Start streaming server"
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.881113000Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.881120800Z" level=info msg="runtime interface starting up..."
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.881127560Z" level=info msg="starting plugins..."
Sep 3 23:23:04.881165 containerd[1526]: time="2025-09-03T23:23:04.881141240Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 3 23:23:04.881355 containerd[1526]: time="2025-09-03T23:23:04.881222880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 3 23:23:04.881355 containerd[1526]: time="2025-09-03T23:23:04.881279520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 3 23:23:04.881742 containerd[1526]: time="2025-09-03T23:23:04.881719560Z" level=info msg="containerd successfully booted in 0.099464s"
Sep 3 23:23:04.882361 systemd[1]: Started containerd.service - containerd container runtime.
Sep 3 23:23:04.884954 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 3 23:23:04.906967 systemd[1]: issuegen.service: Deactivated successfully.
Sep 3 23:23:04.907211 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 3 23:23:04.909809 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 3 23:23:04.928256 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 3 23:23:04.930942 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 3 23:23:04.934899 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 3 23:23:04.935975 systemd[1]: Reached target getty.target - Login Prompts.
Sep 3 23:23:05.035844 tar[1520]: linux-arm64/LICENSE
Sep 3 23:23:05.035997 tar[1520]: linux-arm64/README.md
Sep 3 23:23:05.052584 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 3 23:23:06.122060 systemd-networkd[1445]: eth0: Gained IPv6LL
Sep 3 23:23:06.124466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 3 23:23:06.126367 systemd[1]: Reached target network-online.target - Network is Online.
Sep 3 23:23:06.131089 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 3 23:23:06.133678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:06.135616 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 3 23:23:06.159755 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 3 23:23:06.161333 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 3 23:23:06.161530 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 3 23:23:06.163565 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 3 23:23:06.705199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:06.706616 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 3 23:23:06.707574 systemd[1]: Startup finished in 2.020s (kernel) + 5.325s (initrd) + 3.830s (userspace) = 11.176s.
Sep 3 23:23:06.710406 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:23:07.103078 kubelet[1636]: E0903 23:23:07.102957 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:23:07.105362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:23:07.105509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:23:07.106788 systemd[1]: kubelet.service: Consumed 780ms CPU time, 257.5M memory peak.
Sep 3 23:23:10.664836 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 3 23:23:10.666248 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:40988.service - OpenSSH per-connection server daemon (10.0.0.1:40988).
Sep 3 23:23:10.721067 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 40988 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:10.722968 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:10.732497 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 3 23:23:10.734073 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 3 23:23:10.742025 systemd-logind[1505]: New session 1 of user core.
Sep 3 23:23:10.760683 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 3 23:23:10.763712 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 3 23:23:10.784694 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 3 23:23:10.786724 systemd-logind[1505]: New session c1 of user core.
Sep 3 23:23:10.900320 systemd[1654]: Queued start job for default target default.target.
Sep 3 23:23:10.911537 systemd[1654]: Created slice app.slice - User Application Slice.
Sep 3 23:23:10.911567 systemd[1654]: Reached target paths.target - Paths.
Sep 3 23:23:10.911617 systemd[1654]: Reached target timers.target - Timers.
Sep 3 23:23:10.912832 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 3 23:23:10.923465 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 3 23:23:10.923567 systemd[1654]: Reached target sockets.target - Sockets.
Sep 3 23:23:10.923620 systemd[1654]: Reached target basic.target - Basic System.
Sep 3 23:23:10.923673 systemd[1654]: Reached target default.target - Main User Target.
Sep 3 23:23:10.923700 systemd[1654]: Startup finished in 131ms.
Sep 3 23:23:10.923780 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 3 23:23:10.925406 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 3 23:23:10.982586 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:40998.service - OpenSSH per-connection server daemon (10.0.0.1:40998).
Sep 3 23:23:11.023828 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 40998 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:11.025046 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:11.028930 systemd-logind[1505]: New session 2 of user core.
Sep 3 23:23:11.042831 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 3 23:23:11.092888 sshd[1667]: Connection closed by 10.0.0.1 port 40998
Sep 3 23:23:11.093240 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:11.113493 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:40998.service: Deactivated successfully.
Sep 3 23:23:11.115829 systemd[1]: session-2.scope: Deactivated successfully.
Sep 3 23:23:11.116428 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit.
Sep 3 23:23:11.120573 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:41006.service - OpenSSH per-connection server daemon (10.0.0.1:41006).
Sep 3 23:23:11.121208 systemd-logind[1505]: Removed session 2.
Sep 3 23:23:11.176884 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 41006 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:11.178144 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:11.182513 systemd-logind[1505]: New session 3 of user core.
Sep 3 23:23:11.201853 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 3 23:23:11.249659 sshd[1675]: Connection closed by 10.0.0.1 port 41006
Sep 3 23:23:11.250062 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:11.265330 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:41006.service: Deactivated successfully.
Sep 3 23:23:11.267691 systemd[1]: session-3.scope: Deactivated successfully.
Sep 3 23:23:11.268258 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit.
Sep 3 23:23:11.271861 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:41008.service - OpenSSH per-connection server daemon (10.0.0.1:41008).
Sep 3 23:23:11.272514 systemd-logind[1505]: Removed session 3.
Sep 3 23:23:11.329898 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 41008 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:11.330996 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:11.335338 systemd-logind[1505]: New session 4 of user core.
Sep 3 23:23:11.348790 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 3 23:23:11.404897 sshd[1683]: Connection closed by 10.0.0.1 port 41008
Sep 3 23:23:11.402984 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:11.419794 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:41008.service: Deactivated successfully.
Sep 3 23:23:11.421266 systemd[1]: session-4.scope: Deactivated successfully.
Sep 3 23:23:11.423312 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit.
Sep 3 23:23:11.425840 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:41020.service - OpenSSH per-connection server daemon (10.0.0.1:41020).
Sep 3 23:23:11.426659 systemd-logind[1505]: Removed session 4.
Sep 3 23:23:11.497621 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 41020 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:11.498843 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:11.503985 systemd-logind[1505]: New session 5 of user core.
Sep 3 23:23:11.510898 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 3 23:23:11.567594 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 3 23:23:11.567872 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:23:11.585173 sudo[1692]: pam_unix(sudo:session): session closed for user root
Sep 3 23:23:11.586671 sshd[1691]: Connection closed by 10.0.0.1 port 41020
Sep 3 23:23:11.587177 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:11.599789 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:41020.service: Deactivated successfully.
Sep 3 23:23:11.601962 systemd[1]: session-5.scope: Deactivated successfully.
Sep 3 23:23:11.602655 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit.
Sep 3 23:23:11.605063 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:41022.service - OpenSSH per-connection server daemon (10.0.0.1:41022).
Sep 3 23:23:11.605692 systemd-logind[1505]: Removed session 5.
Sep 3 23:23:11.657433 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 41022 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:11.658558 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:11.662427 systemd-logind[1505]: New session 6 of user core.
Sep 3 23:23:11.677785 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 3 23:23:11.728223 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 3 23:23:11.728494 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:23:11.801059 sudo[1702]: pam_unix(sudo:session): session closed for user root
Sep 3 23:23:11.805976 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 3 23:23:11.806229 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:23:11.815215 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:23:11.853007 augenrules[1724]: No rules
Sep 3 23:23:11.854317 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:23:11.854513 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:23:11.855458 sudo[1701]: pam_unix(sudo:session): session closed for user root
Sep 3 23:23:11.857382 sshd[1700]: Connection closed by 10.0.0.1 port 41022
Sep 3 23:23:11.857259 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:11.869550 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:41022.service: Deactivated successfully.
Sep 3 23:23:11.871848 systemd[1]: session-6.scope: Deactivated successfully.
Sep 3 23:23:11.872490 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit.
Sep 3 23:23:11.874705 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:41038.service - OpenSSH per-connection server daemon (10.0.0.1:41038).
Sep 3 23:23:11.875199 systemd-logind[1505]: Removed session 6.
Sep 3 23:23:11.931543 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 41038 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:11.932002 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:11.936733 systemd-logind[1505]: New session 7 of user core.
Sep 3 23:23:11.955850 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 3 23:23:12.006666 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 3 23:23:12.007237 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:23:12.309045 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 3 23:23:12.329943 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 3 23:23:12.544216 dockerd[1757]: time="2025-09-03T23:23:12.544155708Z" level=info msg="Starting up"
Sep 3 23:23:12.545404 dockerd[1757]: time="2025-09-03T23:23:12.545362532Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 3 23:23:12.586837 dockerd[1757]: time="2025-09-03T23:23:12.586583785Z" level=info msg="Loading containers: start."
Sep 3 23:23:12.595665 kernel: Initializing XFRM netlink socket
Sep 3 23:23:12.777764 systemd-networkd[1445]: docker0: Link UP
Sep 3 23:23:12.781748 dockerd[1757]: time="2025-09-03T23:23:12.781697875Z" level=info msg="Loading containers: done."
Sep 3 23:23:12.795126 dockerd[1757]: time="2025-09-03T23:23:12.795070489Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 3 23:23:12.795259 dockerd[1757]: time="2025-09-03T23:23:12.795166832Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 3 23:23:12.795286 dockerd[1757]: time="2025-09-03T23:23:12.795268010Z" level=info msg="Initializing buildkit"
Sep 3 23:23:12.816507 dockerd[1757]: time="2025-09-03T23:23:12.816462029Z" level=info msg="Completed buildkit initialization"
Sep 3 23:23:12.822179 dockerd[1757]: time="2025-09-03T23:23:12.822123488Z" level=info msg="Daemon has completed initialization"
Sep 3 23:23:12.822503 dockerd[1757]: time="2025-09-03T23:23:12.822216275Z" level=info msg="API listen on /run/docker.sock"
Sep 3 23:23:12.822362 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 3 23:23:13.409984 containerd[1526]: time="2025-09-03T23:23:13.409947060Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 3 23:23:13.971520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31925032.mount: Deactivated successfully.
Sep 3 23:23:15.135166 containerd[1526]: time="2025-09-03T23:23:15.135121892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:15.135675 containerd[1526]: time="2025-09-03T23:23:15.135602013Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 3 23:23:15.136671 containerd[1526]: time="2025-09-03T23:23:15.136618730Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:15.139558 containerd[1526]: time="2025-09-03T23:23:15.139530754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:15.141132 containerd[1526]: time="2025-09-03T23:23:15.141101771Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.731113268s"
Sep 3 23:23:15.141239 containerd[1526]: time="2025-09-03T23:23:15.141223910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 3 23:23:15.142397 containerd[1526]: time="2025-09-03T23:23:15.142371798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 3 23:23:16.116972 containerd[1526]: time="2025-09-03T23:23:16.116920431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:16.117911 containerd[1526]: time="2025-09-03T23:23:16.117881683Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 3 23:23:16.118550 containerd[1526]: time="2025-09-03T23:23:16.118518588Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:16.121191 containerd[1526]: time="2025-09-03T23:23:16.121156736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:16.122308 containerd[1526]: time="2025-09-03T23:23:16.122272509Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 979.871295ms"
Sep 3 23:23:16.122354 containerd[1526]: time="2025-09-03T23:23:16.122309160Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 3 23:23:16.123004 containerd[1526]: time="2025-09-03T23:23:16.122953139Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 3 23:23:17.243949 containerd[1526]: time="2025-09-03T23:23:17.243235493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:17.243949 containerd[1526]: time="2025-09-03T23:23:17.243939540Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 3 23:23:17.244591 containerd[1526]: time="2025-09-03T23:23:17.244560487Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:17.248057 containerd[1526]: time="2025-09-03T23:23:17.248016768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:17.249538 containerd[1526]: time="2025-09-03T23:23:17.249508120Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.126482517s"
Sep 3 23:23:17.249666 containerd[1526]: time="2025-09-03T23:23:17.249624396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 3 23:23:17.250128 containerd[1526]: time="2025-09-03T23:23:17.250100009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 3 23:23:17.355899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:23:17.357347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:17.491323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:17.495133 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:23:17.535044 kubelet[2037]: E0903 23:23:17.534988 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:23:17.538148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:23:17.538282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:23:17.538667 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.9M memory peak.
Sep 3 23:23:18.221042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722154904.mount: Deactivated successfully.
Sep 3 23:23:18.586064 containerd[1526]: time="2025-09-03T23:23:18.585958341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:18.586901 containerd[1526]: time="2025-09-03T23:23:18.586759353Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 3 23:23:18.587619 containerd[1526]: time="2025-09-03T23:23:18.587563683Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:18.589686 containerd[1526]: time="2025-09-03T23:23:18.589627513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:18.590416 containerd[1526]: time="2025-09-03T23:23:18.590094993Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.339955013s"
Sep 3 23:23:18.590416 containerd[1526]: time="2025-09-03T23:23:18.590127651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 3 23:23:18.590590 containerd[1526]: time="2025-09-03T23:23:18.590552401Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 3 23:23:19.063125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564265522.mount: Deactivated successfully.
Sep 3 23:23:19.766674 containerd[1526]: time="2025-09-03T23:23:19.765908351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:19.767539 containerd[1526]: time="2025-09-03T23:23:19.767511564Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 3 23:23:19.768585 containerd[1526]: time="2025-09-03T23:23:19.768562251Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:19.771476 containerd[1526]: time="2025-09-03T23:23:19.771440647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:19.772469 containerd[1526]: time="2025-09-03T23:23:19.772442165Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.181857986s"
Sep 3 23:23:19.772524 containerd[1526]: time="2025-09-03T23:23:19.772475144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 3 23:23:19.773197 containerd[1526]: time="2025-09-03T23:23:19.772891078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 3 23:23:20.299841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount612256875.mount: Deactivated successfully.
Sep 3 23:23:20.306470 containerd[1526]: time="2025-09-03T23:23:20.306408309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:23:20.307647 containerd[1526]: time="2025-09-03T23:23:20.307580605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 3 23:23:20.308594 containerd[1526]: time="2025-09-03T23:23:20.308561976Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:23:20.312474 containerd[1526]: time="2025-09-03T23:23:20.312412263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:23:20.313271 containerd[1526]: time="2025-09-03T23:23:20.313053758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 540.134578ms"
Sep 3 23:23:20.313271 containerd[1526]: time="2025-09-03T23:23:20.313086978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 3 23:23:20.313651 containerd[1526]: time="2025-09-03T23:23:20.313602109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 3 23:23:20.836535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998030845.mount: Deactivated successfully.
Sep 3 23:23:22.525929 containerd[1526]: time="2025-09-03T23:23:22.525878990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:22.526427 containerd[1526]: time="2025-09-03T23:23:22.526392678Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 3 23:23:22.527578 containerd[1526]: time="2025-09-03T23:23:22.527523322Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:22.531664 containerd[1526]: time="2025-09-03T23:23:22.530087648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:22.532382 containerd[1526]: time="2025-09-03T23:23:22.532329984Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.218592196s"
Sep 3 23:23:22.532427 containerd[1526]: time="2025-09-03T23:23:22.532378119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 3 23:23:27.779690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 3 23:23:27.781310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:27.799140 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:23:27.799209 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:23:27.800679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:27.802658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:27.829164 systemd[1]: Reload requested from client PID 2195 ('systemctl') (unit session-7.scope)...
Sep 3 23:23:27.829183 systemd[1]: Reloading...
Sep 3 23:23:27.909661 zram_generator::config[2238]: No configuration found.
Sep 3 23:23:28.056694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:23:28.142413 systemd[1]: Reloading finished in 312 ms.
Sep 3 23:23:28.203255 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:23:28.203340 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:23:28.203606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:28.203666 systemd[1]: kubelet.service: Consumed 89ms CPU time, 94.9M memory peak.
Sep 3 23:23:28.205265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:28.331417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:28.342961 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:23:28.376997 kubelet[2283]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:23:28.376997 kubelet[2283]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:23:28.376997 kubelet[2283]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:23:28.377328 kubelet[2283]: I0903 23:23:28.377079 2283 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:23:28.974088 kubelet[2283]: I0903 23:23:28.974053 2283 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 3 23:23:28.974088 kubelet[2283]: I0903 23:23:28.974081 2283 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:23:28.974397 kubelet[2283]: I0903 23:23:28.974378 2283 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 3 23:23:28.993056 kubelet[2283]: E0903 23:23:28.993020 2283 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:28.994096 kubelet[2283]: I0903 23:23:28.994017 2283 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:23:29.001322 kubelet[2283]: I0903 23:23:29.001299 2283 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:23:29.006090 kubelet[2283]: I0903 23:23:29.005157 2283 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:23:29.006090 kubelet[2283]: I0903 23:23:29.005409 2283 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 3 23:23:29.006090 kubelet[2283]: I0903 23:23:29.005508 2283 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:23:29.006090 kubelet[2283]: I0903 23:23:29.005535 2283 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:23:29.006300 kubelet[2283]: I0903 23:23:29.005844 2283 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:23:29.006300 kubelet[2283]: I0903 23:23:29.005855 2283 container_manager_linux.go:300] "Creating device plugin manager"
Sep 3 23:23:29.006300 kubelet[2283]: I0903 23:23:29.006139 2283 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:23:29.008487 kubelet[2283]: I0903 23:23:29.008461 2283 kubelet.go:408] "Attempting to sync node with API server"
Sep 3 23:23:29.008487 kubelet[2283]: I0903 23:23:29.008491 2283 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:23:29.008568 kubelet[2283]: I0903 23:23:29.008515 2283 kubelet.go:314] "Adding apiserver pod source"
Sep 3 23:23:29.008568 kubelet[2283]: I0903 23:23:29.008525 2283 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:23:29.011782 kubelet[2283]: W0903 23:23:29.011562 2283 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 3 23:23:29.011859 kubelet[2283]: E0903 23:23:29.011789 2283 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:29.012290 kubelet[2283]: W0903 23:23:29.012253 2283 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 3 23:23:29.012346 kubelet[2283]: E0903 23:23:29.012299 2283 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:29.012386 kubelet[2283]: I0903 23:23:29.012366 2283 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:23:29.013183 kubelet[2283]: I0903 23:23:29.013168 2283 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:23:29.013525 kubelet[2283]: W0903 23:23:29.013498 2283 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 3 23:23:29.015186 kubelet[2283]: I0903 23:23:29.014611 2283 server.go:1274] "Started kubelet"
Sep 3 23:23:29.015746 kubelet[2283]: I0903 23:23:29.015701 2283 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:23:29.015887 kubelet[2283]: I0903 23:23:29.015446 2283 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:23:29.016481 kubelet[2283]: I0903 23:23:29.016013 2283 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:23:29.016942 kubelet[2283]: I0903 23:23:29.016895 2283 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:23:29.017211 kubelet[2283]: I0903 23:23:29.017163 2283 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:23:29.017756 kubelet[2283]: I0903 23:23:29.017735 2283 server.go:449] "Adding debug handlers to kubelet server"
Sep 3 23:23:29.018584 kubelet[2283]: I0903 23:23:29.018562 2283 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 3 23:23:29.018695 kubelet[2283]: I0903 23:23:29.018688 2283 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 3 23:23:29.018739 kubelet[2283]: I0903 23:23:29.018733 2283 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:23:29.019137 kubelet[2283]: W0903 23:23:29.019071 2283 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 3 23:23:29.019137 kubelet[2283]: E0903 23:23:29.019120 2283 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:29.019338 kubelet[2283]: I0903 23:23:29.019302 2283 factory.go:221] Registration of the systemd container factory successfully
Sep 3 23:23:29.019405 kubelet[2283]: I0903 23:23:29.019387 2283 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:23:29.019824 kubelet[2283]: E0903 23:23:29.019800 2283 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 3 23:23:29.020083 kubelet[2283]: E0903 23:23:29.020005 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms"
Sep 3 23:23:29.020329 kubelet[2283]: E0903 23:23:29.018398 2283 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861e94441589a98 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-03 23:23:29.014577816 +0000 UTC m=+0.668693030,LastTimestamp:2025-09-03 23:23:29.014577816 +0000 UTC m=+0.668693030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 3 23:23:29.020426 kubelet[2283]: I0903 23:23:29.020395 2283 factory.go:221] Registration of the containerd container factory successfully
Sep 3 23:23:29.020651 kubelet[2283]: E0903 23:23:29.020566 2283 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 3 23:23:29.029268 kubelet[2283]: I0903 23:23:29.029238 2283 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 3 23:23:29.029268 kubelet[2283]: I0903 23:23:29.029255 2283 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 3 23:23:29.029268 kubelet[2283]: I0903 23:23:29.029271 2283 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:23:29.032102 kubelet[2283]: I0903 23:23:29.032038 2283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:23:29.033151 kubelet[2283]: I0903 23:23:29.033120 2283 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 3 23:23:29.033151 kubelet[2283]: I0903 23:23:29.033144 2283 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 3 23:23:29.033236 kubelet[2283]: I0903 23:23:29.033163 2283 kubelet.go:2321] "Starting kubelet main sync loop" Sep 3 23:23:29.033236 kubelet[2283]: E0903 23:23:29.033204 2283 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:23:29.117900 kubelet[2283]: I0903 23:23:29.117847 2283 policy_none.go:49] "None policy: Start" Sep 3 23:23:29.118030 kubelet[2283]: W0903 23:23:29.117974 2283 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Sep 3 23:23:29.118070 kubelet[2283]: E0903 23:23:29.118032 2283 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:29.118738 kubelet[2283]: I0903 23:23:29.118721 2283 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 3 23:23:29.118789 kubelet[2283]: I0903 23:23:29.118748 2283 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:23:29.120210 kubelet[2283]: E0903 23:23:29.120177 2283 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:23:29.125143 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 3 23:23:29.133760 kubelet[2283]: E0903 23:23:29.133727 2283 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 3 23:23:29.135885 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 3 23:23:29.138753 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 3 23:23:29.146521 kubelet[2283]: I0903 23:23:29.146464 2283 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 3 23:23:29.146898 kubelet[2283]: I0903 23:23:29.146722 2283 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 3 23:23:29.146898 kubelet[2283]: I0903 23:23:29.146741 2283 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 3 23:23:29.147056 kubelet[2283]: I0903 23:23:29.146995 2283 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 3 23:23:29.148522 kubelet[2283]: E0903 23:23:29.148484 2283 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 3 23:23:29.220737 kubelet[2283]: E0903 23:23:29.220690 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms"
Sep 3 23:23:29.248695 kubelet[2283]: I0903 23:23:29.247851 2283 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 3 23:23:29.248695 kubelet[2283]: E0903 23:23:29.248298 2283 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Sep 3 23:23:29.343625 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 3 23:23:29.368990 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
Sep 3 23:23:29.382891 systemd[1]: Created slice kubepods-burstable-podd21cea10b243ac810f9704ffdd55450c.slice - libcontainer container kubepods-burstable-podd21cea10b243ac810f9704ffdd55450c.slice.
Sep 3 23:23:29.449763 kubelet[2283]: I0903 23:23:29.449730 2283 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 3 23:23:29.450115 kubelet[2283]: E0903 23:23:29.450058 2283 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Sep 3 23:23:29.521060 kubelet[2283]: I0903 23:23:29.520941 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:23:29.521060 kubelet[2283]: I0903 23:23:29.520985 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:23:29.521060 kubelet[2283]: I0903 23:23:29.521012 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 3 23:23:29.521060 kubelet[2283]: I0903 23:23:29.521028 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d21cea10b243ac810f9704ffdd55450c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d21cea10b243ac810f9704ffdd55450c\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:23:29.521060 kubelet[2283]: I0903 23:23:29.521045 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:23:29.521229 kubelet[2283]: I0903 23:23:29.521061 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:23:29.521229 kubelet[2283]: I0903 23:23:29.521077 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:23:29.521229 kubelet[2283]: I0903 23:23:29.521094 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d21cea10b243ac810f9704ffdd55450c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d21cea10b243ac810f9704ffdd55450c\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:23:29.521229 kubelet[2283]: I0903 23:23:29.521110 2283 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d21cea10b243ac810f9704ffdd55450c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d21cea10b243ac810f9704ffdd55450c\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:23:29.621658 kubelet[2283]: E0903 23:23:29.621581 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms"
Sep 3 23:23:29.666500 containerd[1526]: time="2025-09-03T23:23:29.666457760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:29.672136 containerd[1526]: time="2025-09-03T23:23:29.671990382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:29.687034 containerd[1526]: time="2025-09-03T23:23:29.686288219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d21cea10b243ac810f9704ffdd55450c,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:29.692269 containerd[1526]: time="2025-09-03T23:23:29.691570245Z" level=info msg="connecting to shim 7f736e13efdcac20f1b5a33056284c1a9594efc0ed086e5df92e243118301683" address="unix:///run/containerd/s/f7b73ab8d52f49f71a3a6f412fdd9d3ae662539f568f3230ff34d0a793c7ec29" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:29.713735 containerd[1526]: time="2025-09-03T23:23:29.712804192Z" level=info msg="connecting to shim 4f88dfdd5cac1c1e81b1f0e135bdaf327f170f5088c14f8a5c8136d89d7ec544" address="unix:///run/containerd/s/1ce43352f23f2d60d1796de9d99c4ca186768e299d39c6da8c6f373044ba0027" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:29.733187 systemd[1]: Started cri-containerd-7f736e13efdcac20f1b5a33056284c1a9594efc0ed086e5df92e243118301683.scope - libcontainer container 7f736e13efdcac20f1b5a33056284c1a9594efc0ed086e5df92e243118301683.
Sep 3 23:23:29.734704 containerd[1526]: time="2025-09-03T23:23:29.734666608Z" level=info msg="connecting to shim b3dcd636024f62ff1fbfa2648c0ac1aefdd3cbb29e37b7123badb5d644b2db5c" address="unix:///run/containerd/s/5da4a009c8c7834859674a8f42c1a8d4db2afd2a77d733a49dee785e0cd8e9ef" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:29.749812 systemd[1]: Started cri-containerd-4f88dfdd5cac1c1e81b1f0e135bdaf327f170f5088c14f8a5c8136d89d7ec544.scope - libcontainer container 4f88dfdd5cac1c1e81b1f0e135bdaf327f170f5088c14f8a5c8136d89d7ec544.
Sep 3 23:23:29.754054 systemd[1]: Started cri-containerd-b3dcd636024f62ff1fbfa2648c0ac1aefdd3cbb29e37b7123badb5d644b2db5c.scope - libcontainer container b3dcd636024f62ff1fbfa2648c0ac1aefdd3cbb29e37b7123badb5d644b2db5c.
Sep 3 23:23:29.784418 containerd[1526]: time="2025-09-03T23:23:29.784299536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f736e13efdcac20f1b5a33056284c1a9594efc0ed086e5df92e243118301683\""
Sep 3 23:23:29.788133 containerd[1526]: time="2025-09-03T23:23:29.788077826Z" level=info msg="CreateContainer within sandbox \"7f736e13efdcac20f1b5a33056284c1a9594efc0ed086e5df92e243118301683\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 3 23:23:29.796860 containerd[1526]: time="2025-09-03T23:23:29.796819850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d21cea10b243ac810f9704ffdd55450c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3dcd636024f62ff1fbfa2648c0ac1aefdd3cbb29e37b7123badb5d644b2db5c\""
Sep 3 23:23:29.797102 containerd[1526]: time="2025-09-03T23:23:29.797077803Z" level=info msg="Container f3445c46760855f35ec75154081e2a1cd0e08fba59c87e3c1e06632fd35f93d0: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:29.800563 containerd[1526]: time="2025-09-03T23:23:29.800520207Z" level=info msg="CreateContainer within sandbox \"b3dcd636024f62ff1fbfa2648c0ac1aefdd3cbb29e37b7123badb5d644b2db5c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 3 23:23:29.804118 containerd[1526]: time="2025-09-03T23:23:29.804072973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f88dfdd5cac1c1e81b1f0e135bdaf327f170f5088c14f8a5c8136d89d7ec544\""
Sep 3 23:23:29.806448 containerd[1526]: time="2025-09-03T23:23:29.806396233Z" level=info msg="CreateContainer within sandbox \"4f88dfdd5cac1c1e81b1f0e135bdaf327f170f5088c14f8a5c8136d89d7ec544\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 3 23:23:29.809249 containerd[1526]: time="2025-09-03T23:23:29.809217085Z" level=info msg="CreateContainer within sandbox \"7f736e13efdcac20f1b5a33056284c1a9594efc0ed086e5df92e243118301683\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f3445c46760855f35ec75154081e2a1cd0e08fba59c87e3c1e06632fd35f93d0\""
Sep 3 23:23:29.809888 containerd[1526]: time="2025-09-03T23:23:29.809859070Z" level=info msg="StartContainer for \"f3445c46760855f35ec75154081e2a1cd0e08fba59c87e3c1e06632fd35f93d0\""
Sep 3 23:23:29.810884 containerd[1526]: time="2025-09-03T23:23:29.810861453Z" level=info msg="connecting to shim f3445c46760855f35ec75154081e2a1cd0e08fba59c87e3c1e06632fd35f93d0" address="unix:///run/containerd/s/f7b73ab8d52f49f71a3a6f412fdd9d3ae662539f568f3230ff34d0a793c7ec29" protocol=ttrpc version=3
Sep 3 23:23:29.814586 containerd[1526]: time="2025-09-03T23:23:29.814495632Z" level=info msg="Container 474e1ca5a75d951e0e7c98bac2788442140e877dd24caf0e8814f707bc5a5526: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:29.818456 containerd[1526]: time="2025-09-03T23:23:29.818392363Z" level=info msg="Container 4aa4edb6df08c727c2b9e80e658e23d30aa8b526740418a2a2e5c8682fe1d374: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:29.821514 containerd[1526]: time="2025-09-03T23:23:29.821476647Z" level=info msg="CreateContainer within sandbox \"b3dcd636024f62ff1fbfa2648c0ac1aefdd3cbb29e37b7123badb5d644b2db5c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"474e1ca5a75d951e0e7c98bac2788442140e877dd24caf0e8814f707bc5a5526\""
Sep 3 23:23:29.822274 containerd[1526]: time="2025-09-03T23:23:29.822230874Z" level=info msg="StartContainer for \"474e1ca5a75d951e0e7c98bac2788442140e877dd24caf0e8814f707bc5a5526\""
Sep 3 23:23:29.824400 containerd[1526]: time="2025-09-03T23:23:29.824351162Z" level=info msg="connecting to shim 474e1ca5a75d951e0e7c98bac2788442140e877dd24caf0e8814f707bc5a5526" address="unix:///run/containerd/s/5da4a009c8c7834859674a8f42c1a8d4db2afd2a77d733a49dee785e0cd8e9ef" protocol=ttrpc version=3
Sep 3 23:23:29.828007 containerd[1526]: time="2025-09-03T23:23:29.826975880Z" level=info msg="CreateContainer within sandbox \"4f88dfdd5cac1c1e81b1f0e135bdaf327f170f5088c14f8a5c8136d89d7ec544\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4aa4edb6df08c727c2b9e80e658e23d30aa8b526740418a2a2e5c8682fe1d374\""
Sep 3 23:23:29.828007 containerd[1526]: time="2025-09-03T23:23:29.827445562Z" level=info msg="StartContainer for \"4aa4edb6df08c727c2b9e80e658e23d30aa8b526740418a2a2e5c8682fe1d374\""
Sep 3 23:23:29.828422 containerd[1526]: time="2025-09-03T23:23:29.828383207Z" level=info msg="connecting to shim 4aa4edb6df08c727c2b9e80e658e23d30aa8b526740418a2a2e5c8682fe1d374" address="unix:///run/containerd/s/1ce43352f23f2d60d1796de9d99c4ca186768e299d39c6da8c6f373044ba0027" protocol=ttrpc version=3
Sep 3 23:23:29.832845 systemd[1]: Started cri-containerd-f3445c46760855f35ec75154081e2a1cd0e08fba59c87e3c1e06632fd35f93d0.scope - libcontainer container f3445c46760855f35ec75154081e2a1cd0e08fba59c87e3c1e06632fd35f93d0.
Sep 3 23:23:29.851606 kubelet[2283]: I0903 23:23:29.851543 2283 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 3 23:23:29.852003 kubelet[2283]: E0903 23:23:29.851963 2283 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Sep 3 23:23:29.855818 systemd[1]: Started cri-containerd-474e1ca5a75d951e0e7c98bac2788442140e877dd24caf0e8814f707bc5a5526.scope - libcontainer container 474e1ca5a75d951e0e7c98bac2788442140e877dd24caf0e8814f707bc5a5526.
Sep 3 23:23:29.856950 systemd[1]: Started cri-containerd-4aa4edb6df08c727c2b9e80e658e23d30aa8b526740418a2a2e5c8682fe1d374.scope - libcontainer container 4aa4edb6df08c727c2b9e80e658e23d30aa8b526740418a2a2e5c8682fe1d374.
Sep 3 23:23:29.892955 containerd[1526]: time="2025-09-03T23:23:29.892915490Z" level=info msg="StartContainer for \"f3445c46760855f35ec75154081e2a1cd0e08fba59c87e3c1e06632fd35f93d0\" returns successfully"
Sep 3 23:23:29.908649 containerd[1526]: time="2025-09-03T23:23:29.908573230Z" level=info msg="StartContainer for \"4aa4edb6df08c727c2b9e80e658e23d30aa8b526740418a2a2e5c8682fe1d374\" returns successfully"
Sep 3 23:23:29.909307 containerd[1526]: time="2025-09-03T23:23:29.909248204Z" level=info msg="StartContainer for \"474e1ca5a75d951e0e7c98bac2788442140e877dd24caf0e8814f707bc5a5526\" returns successfully"
Sep 3 23:23:29.986083 kubelet[2283]: W0903 23:23:29.986010 2283 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 3 23:23:29.986083 kubelet[2283]: E0903 23:23:29.986083 2283 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:30.653600 kubelet[2283]: I0903 23:23:30.653464 2283 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 3 23:23:31.913720 kubelet[2283]: E0903 23:23:31.913612 2283 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 3 23:23:32.011504 kubelet[2283]: I0903 23:23:32.011251 2283 apiserver.go:52] "Watching apiserver"
Sep 3 23:23:32.019089 kubelet[2283]: I0903 23:23:32.019057 2283 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 3 23:23:32.110987 kubelet[2283]: I0903 23:23:32.110836 2283 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 3 23:23:33.767967 systemd[1]: Reload requested from client PID 2557 ('systemctl') (unit session-7.scope)...
Sep 3 23:23:33.767983 systemd[1]: Reloading...
Sep 3 23:23:33.837687 zram_generator::config[2603]: No configuration found.
Sep 3 23:23:33.987833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:23:34.085667 systemd[1]: Reloading finished in 317 ms.
Sep 3 23:23:34.110307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:34.133724 systemd[1]: kubelet.service: Deactivated successfully.
Sep 3 23:23:34.134718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:34.134782 systemd[1]: kubelet.service: Consumed 1.018s CPU time, 125M memory peak.
Sep 3 23:23:34.136832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:34.312410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:34.325976 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:23:34.371736 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:23:34.371736 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:23:34.371736 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:23:34.371736 kubelet[2643]: I0903 23:23:34.371351 2643 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:23:34.377074 kubelet[2643]: I0903 23:23:34.377029 2643 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 3 23:23:34.377074 kubelet[2643]: I0903 23:23:34.377070 2643 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:23:34.377338 kubelet[2643]: I0903 23:23:34.377323 2643 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 3 23:23:34.379265 kubelet[2643]: I0903 23:23:34.379238 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 3 23:23:34.382731 kubelet[2643]: I0903 23:23:34.382698 2643 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:23:34.386449 kubelet[2643]: I0903 23:23:34.386417 2643 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:23:34.389072 kubelet[2643]: I0903 23:23:34.389018 2643 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:23:34.389304 kubelet[2643]: I0903 23:23:34.389286 2643 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 3 23:23:34.389490 kubelet[2643]: I0903 23:23:34.389460 2643 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:23:34.389879 kubelet[2643]: I0903 23:23:34.389491 2643 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:23:34.389957 kubelet[2643]: I0903 23:23:34.389931 2643 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:23:34.389957 kubelet[2643]: I0903 23:23:34.389946 2643 container_manager_linux.go:300] "Creating device plugin manager"
Sep 3 23:23:34.389996 kubelet[2643]: I0903 23:23:34.389989 2643 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:23:34.390286 kubelet[2643]: I0903 23:23:34.390123 2643 kubelet.go:408] "Attempting to sync node with API server"
Sep 3 23:23:34.390286 kubelet[2643]: I0903 23:23:34.390139 2643 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:23:34.390286 kubelet[2643]: I0903 23:23:34.390173 2643 kubelet.go:314] "Adding apiserver pod source"
Sep 3 23:23:34.390286 kubelet[2643]: I0903 23:23:34.390188 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:23:34.397445 kubelet[2643]: I0903 23:23:34.397382 2643 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:23:34.398070 kubelet[2643]: I0903 23:23:34.398036 2643 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:23:34.398577 kubelet[2643]: I0903 23:23:34.398555 2643 server.go:1274] "Started kubelet"
Sep 3 23:23:34.401120 kubelet[2643]: I0903 23:23:34.401096 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:23:34.402463 kubelet[2643]: I0903 23:23:34.401923 2643 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:23:34.403862 kubelet[2643]: I0903 23:23:34.403835 2643 server.go:449] "Adding debug handlers to kubelet server"
Sep 3 23:23:34.404585 kubelet[2643]: I0903 23:23:34.404503 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:23:34.404851 kubelet[2643]: I0903 23:23:34.404828 2643 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:23:34.405780 kubelet[2643]: I0903 23:23:34.405704 2643 factory.go:221] Registration of the systemd container factory successfully
Sep 3 23:23:34.405780 kubelet[2643]: I0903 23:23:34.405760 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:23:34.405937 kubelet[2643]: I0903 23:23:34.405823 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:23:34.408545 kubelet[2643]: I0903 23:23:34.408514 2643 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 3 23:23:34.408725 kubelet[2643]: E0903 23:23:34.408702 2643 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 3 23:23:34.409803 kubelet[2643]: I0903 23:23:34.409260 2643 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 3 23:23:34.409803 kubelet[2643]: I0903 23:23:34.409400 2643 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:23:34.410332 kubelet[2643]: I0903 23:23:34.410235 2643 factory.go:221] Registration of the containerd container factory successfully
Sep 3 23:23:34.431182 kubelet[2643]: I0903 23:23:34.430029 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:23:34.432861 kubelet[2643]: I0903 23:23:34.432833 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 3 23:23:34.432861 kubelet[2643]: I0903 23:23:34.432859 2643 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 3 23:23:34.432951 kubelet[2643]: I0903 23:23:34.432875 2643 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 3 23:23:34.432951 kubelet[2643]: E0903 23:23:34.432934 2643 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.451381 2643 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.451401 2643 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.451423 2643 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.451598 2643 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.451609 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.451632 2643 policy_none.go:49] "None policy: Start"
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.452212 2643 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.452234 2643 state_mem.go:35] "Initializing new in-memory state store"
Sep 3 23:23:34.452672 kubelet[2643]: I0903 23:23:34.452418 2643 state_mem.go:75] "Updated machine memory state"
Sep 3 23:23:34.460001 kubelet[2643]: I0903 23:23:34.459964 2643 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 3 23:23:34.460151 kubelet[2643]: I0903 23:23:34.460134 2643 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 3 23:23:34.460182 kubelet[2643]: I0903 23:23:34.460152 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 3 23:23:34.460830 kubelet[2643]: I0903 23:23:34.460794 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 3 23:23:34.563469 kubelet[2643]: I0903 23:23:34.563418 2643 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 3 23:23:34.572041 kubelet[2643]: I0903 23:23:34.571987 2643 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 3 23:23:34.572152 kubelet[2643]: I0903 23:23:34.572132 2643 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 3 23:23:34.711495 kubelet[2643]: I0903 23:23:34.711388 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d21cea10b243ac810f9704ffdd55450c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d21cea10b243ac810f9704ffdd55450c\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:23:34.711495 kubelet[2643]: I0903 23:23:34.711432 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:23:34.711495 kubelet[2643]: I0903 23:23:34.711452 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:23:34.711495 kubelet[2643]: I0903 23:23:34.711468 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 3 23:23:34.711495 kubelet[2643]: I0903 23:23:34.711484 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d21cea10b243ac810f9704ffdd55450c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d21cea10b243ac810f9704ffdd55450c\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:23:34.711708 kubelet[2643]: I0903 23:23:34.711502 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d21cea10b243ac810f9704ffdd55450c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d21cea10b243ac810f9704ffdd55450c\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:23:34.711708 kubelet[2643]: I0903 23:23:34.711517 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:23:34.711708 kubelet[2643]: I0903 23:23:34.711534 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:23:34.711708 kubelet[2643]: I0903 23:23:34.711549 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:23:34.781491 sudo[2677]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 3 23:23:34.781789 sudo[2677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 3 23:23:35.245986 sudo[2677]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:35.391694 kubelet[2643]: I0903 23:23:35.391659 2643 apiserver.go:52] "Watching apiserver" Sep 3 23:23:35.410285 kubelet[2643]: I0903 23:23:35.410200 2643 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 3 23:23:35.487594 kubelet[2643]: I0903 23:23:35.487508 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.487487269 podStartE2EDuration="1.487487269s" podCreationTimestamp="2025-09-03 23:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:35.467558453 +0000 UTC m=+1.138613379" watchObservedRunningTime="2025-09-03 23:23:35.487487269 +0000 UTC m=+1.158542235" Sep 3 23:23:35.515045 kubelet[2643]: I0903 23:23:35.514157 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.514137871 podStartE2EDuration="1.514137871s" podCreationTimestamp="2025-09-03 23:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:35.487742251 +0000 UTC m=+1.158797217" watchObservedRunningTime="2025-09-03 23:23:35.514137871 +0000 UTC m=+1.185192837" Sep 3 23:23:35.529451 kubelet[2643]: I0903 23:23:35.529372 2643 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5293540810000001 podStartE2EDuration="1.529354081s" podCreationTimestamp="2025-09-03 23:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:35.515198829 +0000 UTC m=+1.186253795" watchObservedRunningTime="2025-09-03 23:23:35.529354081 +0000 UTC m=+1.200409047" Sep 3 23:23:36.866747 sudo[1736]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:36.867942 sshd[1735]: Connection closed by 10.0.0.1 port 41038 Sep 3 23:23:36.868417 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:36.872085 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:41038.service: Deactivated successfully. Sep 3 23:23:36.874328 systemd[1]: session-7.scope: Deactivated successfully. Sep 3 23:23:36.874596 systemd[1]: session-7.scope: Consumed 7.273s CPU time, 267.4M memory peak. Sep 3 23:23:36.875803 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. Sep 3 23:23:36.877293 systemd-logind[1505]: Removed session 7. Sep 3 23:23:38.782063 kubelet[2643]: I0903 23:23:38.781932 2643 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 3 23:23:38.782760 containerd[1526]: time="2025-09-03T23:23:38.782672528Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 3 23:23:38.783680 kubelet[2643]: I0903 23:23:38.782935 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 3 23:23:39.729067 systemd[1]: Created slice kubepods-besteffort-pod9d7cb80b_f17c_4bb4_a0ad_7948b22ca9a8.slice - libcontainer container kubepods-besteffort-pod9d7cb80b_f17c_4bb4_a0ad_7948b22ca9a8.slice. 
Sep 3 23:23:39.746836 kubelet[2643]: I0903 23:23:39.746800 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5wld\" (UniqueName: \"kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-kube-api-access-w5wld\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.747255 systemd[1]: Created slice kubepods-burstable-podc5b6f835_f6d4_4c14_957a_ec9a889e6aa4.slice - libcontainer container kubepods-burstable-podc5b6f835_f6d4_4c14_957a_ec9a889e6aa4.slice. Sep 3 23:23:39.748617 kubelet[2643]: I0903 23:23:39.747324 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8-kube-proxy\") pod \"kube-proxy-cppvh\" (UID: \"9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8\") " pod="kube-system/kube-proxy-cppvh" Sep 3 23:23:39.748617 kubelet[2643]: I0903 23:23:39.747352 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-bpf-maps\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748617 kubelet[2643]: I0903 23:23:39.747368 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-etc-cni-netd\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748617 kubelet[2643]: I0903 23:23:39.747383 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-net\") pod \"cilium-7vfzn\" 
(UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748617 kubelet[2643]: I0903 23:23:39.747397 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-cgroup\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748617 kubelet[2643]: I0903 23:23:39.747412 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-clustermesh-secrets\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748816 kubelet[2643]: I0903 23:23:39.747429 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5b82\" (UniqueName: \"kubernetes.io/projected/9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8-kube-api-access-g5b82\") pod \"kube-proxy-cppvh\" (UID: \"9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8\") " pod="kube-system/kube-proxy-cppvh" Sep 3 23:23:39.748816 kubelet[2643]: I0903 23:23:39.747448 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-config-path\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748816 kubelet[2643]: I0903 23:23:39.747463 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-kernel\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" 
Sep 3 23:23:39.748816 kubelet[2643]: I0903 23:23:39.747477 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hubble-tls\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748816 kubelet[2643]: I0903 23:23:39.747492 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8-lib-modules\") pod \"kube-proxy-cppvh\" (UID: \"9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8\") " pod="kube-system/kube-proxy-cppvh" Sep 3 23:23:39.748915 kubelet[2643]: I0903 23:23:39.747505 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8-xtables-lock\") pod \"kube-proxy-cppvh\" (UID: \"9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8\") " pod="kube-system/kube-proxy-cppvh" Sep 3 23:23:39.748915 kubelet[2643]: I0903 23:23:39.747519 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-lib-modules\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748915 kubelet[2643]: I0903 23:23:39.747535 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hostproc\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748915 kubelet[2643]: I0903 23:23:39.747548 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-xtables-lock\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748915 kubelet[2643]: I0903 23:23:39.747575 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-run\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.748915 kubelet[2643]: I0903 23:23:39.747592 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cni-path\") pod \"cilium-7vfzn\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") " pod="kube-system/cilium-7vfzn" Sep 3 23:23:39.897342 systemd[1]: Created slice kubepods-besteffort-podc619e907_2c9c_4a1c_b128_59585fd77354.slice - libcontainer container kubepods-besteffort-podc619e907_2c9c_4a1c_b128_59585fd77354.slice. 
Sep 3 23:23:39.949775 kubelet[2643]: I0903 23:23:39.949684 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c619e907-2c9c-4a1c-b128-59585fd77354-cilium-config-path\") pod \"cilium-operator-5d85765b45-f5n52\" (UID: \"c619e907-2c9c-4a1c-b128-59585fd77354\") " pod="kube-system/cilium-operator-5d85765b45-f5n52" Sep 3 23:23:39.949775 kubelet[2643]: I0903 23:23:39.949782 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnbx\" (UniqueName: \"kubernetes.io/projected/c619e907-2c9c-4a1c-b128-59585fd77354-kube-api-access-xgnbx\") pod \"cilium-operator-5d85765b45-f5n52\" (UID: \"c619e907-2c9c-4a1c-b128-59585fd77354\") " pod="kube-system/cilium-operator-5d85765b45-f5n52" Sep 3 23:23:40.044429 containerd[1526]: time="2025-09-03T23:23:40.044318157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cppvh,Uid:9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:40.052293 containerd[1526]: time="2025-09-03T23:23:40.052254166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vfzn,Uid:c5b6f835-f6d4-4c14-957a-ec9a889e6aa4,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:40.069794 containerd[1526]: time="2025-09-03T23:23:40.069530553Z" level=info msg="connecting to shim be420abba53b7c0707f81dc9d82034ce99b793a30dbc137bd5570484201c5f6c" address="unix:///run/containerd/s/31c2d3707a4f16401de2367852dfeaa16bff42b4a67f36c1f4c98a4d8fee01c8" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:40.082251 containerd[1526]: time="2025-09-03T23:23:40.082198102Z" level=info msg="connecting to shim d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884" address="unix:///run/containerd/s/5aaf3f109188e861f140df54f8386d445ebbca39f9e42161d0812076e5eb3eaf" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:40.100831 systemd[1]: Started 
cri-containerd-be420abba53b7c0707f81dc9d82034ce99b793a30dbc137bd5570484201c5f6c.scope - libcontainer container be420abba53b7c0707f81dc9d82034ce99b793a30dbc137bd5570484201c5f6c. Sep 3 23:23:40.108678 systemd[1]: Started cri-containerd-d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884.scope - libcontainer container d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884. Sep 3 23:23:40.131145 containerd[1526]: time="2025-09-03T23:23:40.130748644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cppvh,Uid:9d7cb80b-f17c-4bb4-a0ad-7948b22ca9a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"be420abba53b7c0707f81dc9d82034ce99b793a30dbc137bd5570484201c5f6c\"" Sep 3 23:23:40.134004 containerd[1526]: time="2025-09-03T23:23:40.133959674Z" level=info msg="CreateContainer within sandbox \"be420abba53b7c0707f81dc9d82034ce99b793a30dbc137bd5570484201c5f6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 3 23:23:40.140398 containerd[1526]: time="2025-09-03T23:23:40.140355858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vfzn,Uid:c5b6f835-f6d4-4c14-957a-ec9a889e6aa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\"" Sep 3 23:23:40.142365 containerd[1526]: time="2025-09-03T23:23:40.142268302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 3 23:23:40.144595 containerd[1526]: time="2025-09-03T23:23:40.144549845Z" level=info msg="Container ee9447a7946eb749ac4fa388d48602cff6af83cc92225cb3dc3c4b85342414c8: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:40.153240 containerd[1526]: time="2025-09-03T23:23:40.153182100Z" level=info msg="CreateContainer within sandbox \"be420abba53b7c0707f81dc9d82034ce99b793a30dbc137bd5570484201c5f6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"ee9447a7946eb749ac4fa388d48602cff6af83cc92225cb3dc3c4b85342414c8\"" Sep 3 23:23:40.153867 containerd[1526]: time="2025-09-03T23:23:40.153816275Z" level=info msg="StartContainer for \"ee9447a7946eb749ac4fa388d48602cff6af83cc92225cb3dc3c4b85342414c8\"" Sep 3 23:23:40.156329 containerd[1526]: time="2025-09-03T23:23:40.156269710Z" level=info msg="connecting to shim ee9447a7946eb749ac4fa388d48602cff6af83cc92225cb3dc3c4b85342414c8" address="unix:///run/containerd/s/31c2d3707a4f16401de2367852dfeaa16bff42b4a67f36c1f4c98a4d8fee01c8" protocol=ttrpc version=3 Sep 3 23:23:40.177837 systemd[1]: Started cri-containerd-ee9447a7946eb749ac4fa388d48602cff6af83cc92225cb3dc3c4b85342414c8.scope - libcontainer container ee9447a7946eb749ac4fa388d48602cff6af83cc92225cb3dc3c4b85342414c8. Sep 3 23:23:40.203078 containerd[1526]: time="2025-09-03T23:23:40.203037147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f5n52,Uid:c619e907-2c9c-4a1c-b128-59585fd77354,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:40.220512 containerd[1526]: time="2025-09-03T23:23:40.220473788Z" level=info msg="StartContainer for \"ee9447a7946eb749ac4fa388d48602cff6af83cc92225cb3dc3c4b85342414c8\" returns successfully" Sep 3 23:23:40.226149 containerd[1526]: time="2025-09-03T23:23:40.226107657Z" level=info msg="connecting to shim 78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94" address="unix:///run/containerd/s/1916d4e245f4f28019e36a01acd6b7d963070e4a0f5701c1b7bdc4380af466a9" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:40.249832 systemd[1]: Started cri-containerd-78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94.scope - libcontainer container 78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94. 
Sep 3 23:23:40.305877 containerd[1526]: time="2025-09-03T23:23:40.305753905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f5n52,Uid:c619e907-2c9c-4a1c-b128-59585fd77354,Namespace:kube-system,Attempt:0,} returns sandbox id \"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\"" Sep 3 23:23:42.939763 kubelet[2643]: I0903 23:23:42.939459 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cppvh" podStartSLOduration=3.9394425809999998 podStartE2EDuration="3.939442581s" podCreationTimestamp="2025-09-03 23:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:40.467844498 +0000 UTC m=+6.138899864" watchObservedRunningTime="2025-09-03 23:23:42.939442581 +0000 UTC m=+8.610497547" Sep 3 23:23:45.932620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640752433.mount: Deactivated successfully. 
Sep 3 23:23:47.303030 containerd[1526]: time="2025-09-03T23:23:47.302429840Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:47.303030 containerd[1526]: time="2025-09-03T23:23:47.302992981Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 3 23:23:47.303973 containerd[1526]: time="2025-09-03T23:23:47.303942321Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:47.305451 containerd[1526]: time="2025-09-03T23:23:47.305420966Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.16311471s" Sep 3 23:23:47.305529 containerd[1526]: time="2025-09-03T23:23:47.305454362Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 3 23:23:47.310284 containerd[1526]: time="2025-09-03T23:23:47.310256257Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 3 23:23:47.321576 containerd[1526]: time="2025-09-03T23:23:47.321535032Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:23:47.327564 containerd[1526]: time="2025-09-03T23:23:47.326772882Z" level=info msg="Container b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:47.329987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020315277.mount: Deactivated successfully. Sep 3 23:23:47.333893 containerd[1526]: time="2025-09-03T23:23:47.333839859Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\"" Sep 3 23:23:47.335414 containerd[1526]: time="2025-09-03T23:23:47.335234592Z" level=info msg="StartContainer for \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\"" Sep 3 23:23:47.336648 containerd[1526]: time="2025-09-03T23:23:47.336566572Z" level=info msg="connecting to shim b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143" address="unix:///run/containerd/s/5aaf3f109188e861f140df54f8386d445ebbca39f9e42161d0812076e5eb3eaf" protocol=ttrpc version=3 Sep 3 23:23:47.385846 systemd[1]: Started cri-containerd-b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143.scope - libcontainer container b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143. Sep 3 23:23:47.413328 containerd[1526]: time="2025-09-03T23:23:47.413292268Z" level=info msg="StartContainer for \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" returns successfully" Sep 3 23:23:47.436605 systemd[1]: cri-containerd-b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143.scope: Deactivated successfully. Sep 3 23:23:47.437102 systemd[1]: cri-containerd-b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143.scope: Consumed 25ms CPU time, 5.3M memory peak, 3.1M written to disk. 
Sep 3 23:23:47.473937 containerd[1526]: time="2025-09-03T23:23:47.473764952Z" level=info msg="received exit event container_id:\"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" id:\"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" pid:3064 exited_at:{seconds:1756941827 nanos:457545017}" Sep 3 23:23:47.484981 containerd[1526]: time="2025-09-03T23:23:47.484897702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" id:\"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" pid:3064 exited_at:{seconds:1756941827 nanos:457545017}" Sep 3 23:23:47.554721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143-rootfs.mount: Deactivated successfully. Sep 3 23:23:48.478361 containerd[1526]: time="2025-09-03T23:23:48.478318750Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 3 23:23:48.489367 containerd[1526]: time="2025-09-03T23:23:48.489329305Z" level=info msg="Container 008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:48.496599 containerd[1526]: time="2025-09-03T23:23:48.496562112Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\"" Sep 3 23:23:48.499318 containerd[1526]: time="2025-09-03T23:23:48.499130299Z" level=info msg="StartContainer for \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\"" Sep 3 23:23:48.503861 containerd[1526]: time="2025-09-03T23:23:48.503828676Z" level=info msg="connecting to shim 
008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf" address="unix:///run/containerd/s/5aaf3f109188e861f140df54f8386d445ebbca39f9e42161d0812076e5eb3eaf" protocol=ttrpc version=3 Sep 3 23:23:48.527812 systemd[1]: Started cri-containerd-008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf.scope - libcontainer container 008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf. Sep 3 23:23:48.587020 containerd[1526]: time="2025-09-03T23:23:48.586983123Z" level=info msg="StartContainer for \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" returns successfully" Sep 3 23:23:48.595301 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 3 23:23:48.595773 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:48.597526 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:23:48.598966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:23:48.600454 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 3 23:23:48.602746 systemd[1]: cri-containerd-008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf.scope: Deactivated successfully. Sep 3 23:23:48.603153 systemd[1]: cri-containerd-008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf.scope: Consumed 20ms CPU time, 5.8M memory peak, 4K read from disk, 2.3M written to disk. 
Sep 3 23:23:48.619974 containerd[1526]: time="2025-09-03T23:23:48.619940676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" id:\"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" pid:3118 exited_at:{seconds:1756941828 nanos:619632026}" Sep 3 23:23:48.622834 containerd[1526]: time="2025-09-03T23:23:48.621362055Z" level=info msg="received exit event container_id:\"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" id:\"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" pid:3118 exited_at:{seconds:1756941828 nanos:619632026}" Sep 3 23:23:48.635687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:48.809704 containerd[1526]: time="2025-09-03T23:23:48.809546633Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:48.810073 containerd[1526]: time="2025-09-03T23:23:48.810041144Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 3 23:23:48.811255 containerd[1526]: time="2025-09-03T23:23:48.811223468Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:48.812328 containerd[1526]: time="2025-09-03T23:23:48.812284163Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.501993549s" Sep 3 23:23:48.812328 containerd[1526]: time="2025-09-03T23:23:48.812319440Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 3 23:23:48.815407 containerd[1526]: time="2025-09-03T23:23:48.815356421Z" level=info msg="CreateContainer within sandbox \"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 3 23:23:48.821800 containerd[1526]: time="2025-09-03T23:23:48.821760070Z" level=info msg="Container 9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:48.828182 containerd[1526]: time="2025-09-03T23:23:48.828129322Z" level=info msg="CreateContainer within sandbox \"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\"" Sep 3 23:23:48.828727 containerd[1526]: time="2025-09-03T23:23:48.828695546Z" level=info msg="StartContainer for \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\"" Sep 3 23:23:48.830092 containerd[1526]: time="2025-09-03T23:23:48.829913666Z" level=info msg="connecting to shim 9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11" address="unix:///run/containerd/s/1916d4e245f4f28019e36a01acd6b7d963070e4a0f5701c1b7bdc4380af466a9" protocol=ttrpc version=3 Sep 3 23:23:48.852828 systemd[1]: Started cri-containerd-9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11.scope - libcontainer container 9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11. 
Sep 3 23:23:48.925492 containerd[1526]: time="2025-09-03T23:23:48.925457492Z" level=info msg="StartContainer for \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" returns successfully"
Sep 3 23:23:49.332012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf-rootfs.mount: Deactivated successfully.
Sep 3 23:23:49.504976 containerd[1526]: time="2025-09-03T23:23:49.504930581Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 3 23:23:49.511587 kubelet[2643]: I0903 23:23:49.511531 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-f5n52" podStartSLOduration=2.00556186 podStartE2EDuration="10.511513773s" podCreationTimestamp="2025-09-03 23:23:39 +0000 UTC" firstStartedPulling="2025-09-03 23:23:40.307521173 +0000 UTC m=+5.978576139" lastFinishedPulling="2025-09-03 23:23:48.813473086 +0000 UTC m=+14.484528052" observedRunningTime="2025-09-03 23:23:49.511510413 +0000 UTC m=+15.182565379" watchObservedRunningTime="2025-09-03 23:23:49.511513773 +0000 UTC m=+15.182568739"
Sep 3 23:23:49.519761 containerd[1526]: time="2025-09-03T23:23:49.519712736Z" level=info msg="Container b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:49.531295 update_engine[1508]: I20250903 23:23:49.531088 1508 update_attempter.cc:509] Updating boot flags...
Sep 3 23:23:49.538273 containerd[1526]: time="2025-09-03T23:23:49.538100757Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\""
Sep 3 23:23:49.539323 containerd[1526]: time="2025-09-03T23:23:49.539282048Z" level=info msg="StartContainer for \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\""
Sep 3 23:23:49.541480 containerd[1526]: time="2025-09-03T23:23:49.541445008Z" level=info msg="connecting to shim b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29" address="unix:///run/containerd/s/5aaf3f109188e861f140df54f8386d445ebbca39f9e42161d0812076e5eb3eaf" protocol=ttrpc version=3
Sep 3 23:23:49.573810 systemd[1]: Started cri-containerd-b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29.scope - libcontainer container b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29.
Sep 3 23:23:49.708956 containerd[1526]: time="2025-09-03T23:23:49.708914178Z" level=info msg="StartContainer for \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" returns successfully"
Sep 3 23:23:49.717497 systemd[1]: cri-containerd-b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29.scope: Deactivated successfully.
Sep 3 23:23:49.720566 containerd[1526]: time="2025-09-03T23:23:49.720526626Z" level=info msg="received exit event container_id:\"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" id:\"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" pid:3221 exited_at:{seconds:1756941829 nanos:720065228}"
Sep 3 23:23:49.720933 containerd[1526]: time="2025-09-03T23:23:49.720880193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" id:\"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" pid:3221 exited_at:{seconds:1756941829 nanos:720065228}"
Sep 3 23:23:49.760583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29-rootfs.mount: Deactivated successfully.
Sep 3 23:23:50.510665 containerd[1526]: time="2025-09-03T23:23:50.510581380Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 3 23:23:50.530480 containerd[1526]: time="2025-09-03T23:23:50.530403744Z" level=info msg="Container 070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:50.537180 containerd[1526]: time="2025-09-03T23:23:50.537065647Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\""
Sep 3 23:23:50.537759 containerd[1526]: time="2025-09-03T23:23:50.537727949Z" level=info msg="StartContainer for \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\""
Sep 3 23:23:50.538569 containerd[1526]: time="2025-09-03T23:23:50.538536239Z" level=info msg="connecting to shim 070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41" address="unix:///run/containerd/s/5aaf3f109188e861f140df54f8386d445ebbca39f9e42161d0812076e5eb3eaf" protocol=ttrpc version=3
Sep 3 23:23:50.560821 systemd[1]: Started cri-containerd-070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41.scope - libcontainer container 070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41.
Sep 3 23:23:50.583056 systemd[1]: cri-containerd-070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41.scope: Deactivated successfully.
Sep 3 23:23:50.584318 containerd[1526]: time="2025-09-03T23:23:50.584278718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" id:\"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" pid:3265 exited_at:{seconds:1756941830 nanos:583968145}"
Sep 3 23:23:50.584809 containerd[1526]: time="2025-09-03T23:23:50.584677323Z" level=info msg="received exit event container_id:\"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" id:\"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" pid:3265 exited_at:{seconds:1756941830 nanos:583968145}"
Sep 3 23:23:50.594816 containerd[1526]: time="2025-09-03T23:23:50.594768770Z" level=info msg="StartContainer for \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" returns successfully"
Sep 3 23:23:50.606786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41-rootfs.mount: Deactivated successfully.
Sep 3 23:23:51.521666 containerd[1526]: time="2025-09-03T23:23:51.521425295Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 3 23:23:51.547655 containerd[1526]: time="2025-09-03T23:23:51.546988220Z" level=info msg="Container 7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:51.554188 containerd[1526]: time="2025-09-03T23:23:51.554129520Z" level=info msg="CreateContainer within sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\""
Sep 3 23:23:51.554662 containerd[1526]: time="2025-09-03T23:23:51.554622760Z" level=info msg="StartContainer for \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\""
Sep 3 23:23:51.555543 containerd[1526]: time="2025-09-03T23:23:51.555515927Z" level=info msg="connecting to shim 7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797" address="unix:///run/containerd/s/5aaf3f109188e861f140df54f8386d445ebbca39f9e42161d0812076e5eb3eaf" protocol=ttrpc version=3
Sep 3 23:23:51.578805 systemd[1]: Started cri-containerd-7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797.scope - libcontainer container 7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797.
Sep 3 23:23:51.610293 containerd[1526]: time="2025-09-03T23:23:51.610179849Z" level=info msg="StartContainer for \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" returns successfully"
Sep 3 23:23:51.697554 containerd[1526]: time="2025-09-03T23:23:51.697506839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" id:\"7afb870d403f1b020e23c4dabac330206db6caad04d216bffe354a4db09cdeac\" pid:3334 exited_at:{seconds:1756941831 nanos:697215783}"
Sep 3 23:23:51.712239 kubelet[2643]: I0903 23:23:51.712195 2643 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 3 23:23:51.757905 systemd[1]: Created slice kubepods-burstable-pod802f9f46_fb26_4052_a8ab_aa9588dab9e2.slice - libcontainer container kubepods-burstable-pod802f9f46_fb26_4052_a8ab_aa9588dab9e2.slice.
Sep 3 23:23:51.765229 systemd[1]: Created slice kubepods-burstable-pod455b7e6f_6d39_41a2_bbca_9abace7acb34.slice - libcontainer container kubepods-burstable-pod455b7e6f_6d39_41a2_bbca_9abace7acb34.slice.
Sep 3 23:23:51.930304 kubelet[2643]: I0903 23:23:51.930262 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/802f9f46-fb26-4052-a8ab-aa9588dab9e2-config-volume\") pod \"coredns-7c65d6cfc9-jphnr\" (UID: \"802f9f46-fb26-4052-a8ab-aa9588dab9e2\") " pod="kube-system/coredns-7c65d6cfc9-jphnr"
Sep 3 23:23:51.930601 kubelet[2643]: I0903 23:23:51.930312 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9z5\" (UniqueName: \"kubernetes.io/projected/455b7e6f-6d39-41a2-bbca-9abace7acb34-kube-api-access-sg9z5\") pod \"coredns-7c65d6cfc9-qr8mf\" (UID: \"455b7e6f-6d39-41a2-bbca-9abace7acb34\") " pod="kube-system/coredns-7c65d6cfc9-qr8mf"
Sep 3 23:23:51.930601 kubelet[2643]: I0903 23:23:51.930336 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/455b7e6f-6d39-41a2-bbca-9abace7acb34-config-volume\") pod \"coredns-7c65d6cfc9-qr8mf\" (UID: \"455b7e6f-6d39-41a2-bbca-9abace7acb34\") " pod="kube-system/coredns-7c65d6cfc9-qr8mf"
Sep 3 23:23:51.930601 kubelet[2643]: I0903 23:23:51.930421 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqnx\" (UniqueName: \"kubernetes.io/projected/802f9f46-fb26-4052-a8ab-aa9588dab9e2-kube-api-access-9gqnx\") pod \"coredns-7c65d6cfc9-jphnr\" (UID: \"802f9f46-fb26-4052-a8ab-aa9588dab9e2\") " pod="kube-system/coredns-7c65d6cfc9-jphnr"
Sep 3 23:23:52.063079 containerd[1526]: time="2025-09-03T23:23:52.063040834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jphnr,Uid:802f9f46-fb26-4052-a8ab-aa9588dab9e2,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:52.068814 containerd[1526]: time="2025-09-03T23:23:52.068772598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qr8mf,Uid:455b7e6f-6d39-41a2-bbca-9abace7acb34,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:52.545715 kubelet[2643]: I0903 23:23:52.545352 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7vfzn" podStartSLOduration=6.376872816 podStartE2EDuration="13.545333046s" podCreationTimestamp="2025-09-03 23:23:39 +0000 UTC" firstStartedPulling="2025-09-03 23:23:40.141666881 +0000 UTC m=+5.812721847" lastFinishedPulling="2025-09-03 23:23:47.310127111 +0000 UTC m=+12.981182077" observedRunningTime="2025-09-03 23:23:52.543627176 +0000 UTC m=+18.214682142" watchObservedRunningTime="2025-09-03 23:23:52.545333046 +0000 UTC m=+18.216388012"
Sep 3 23:23:53.643163 systemd-networkd[1445]: cilium_host: Link UP
Sep 3 23:23:53.643349 systemd-networkd[1445]: cilium_net: Link UP
Sep 3 23:23:53.643730 systemd-networkd[1445]: cilium_net: Gained carrier
Sep 3 23:23:53.643857 systemd-networkd[1445]: cilium_host: Gained carrier
Sep 3 23:23:53.741131 systemd-networkd[1445]: cilium_vxlan: Link UP
Sep 3 23:23:53.741141 systemd-networkd[1445]: cilium_vxlan: Gained carrier
Sep 3 23:23:53.999673 kernel: NET: Registered PF_ALG protocol family
Sep 3 23:23:54.057918 systemd-networkd[1445]: cilium_host: Gained IPv6LL
Sep 3 23:23:54.313780 systemd-networkd[1445]: cilium_net: Gained IPv6LL
Sep 3 23:23:54.585398 systemd-networkd[1445]: lxc_health: Link UP
Sep 3 23:23:54.586872 systemd-networkd[1445]: lxc_health: Gained carrier
Sep 3 23:23:55.017831 systemd-networkd[1445]: cilium_vxlan: Gained IPv6LL
Sep 3 23:23:55.139719 kernel: eth0: renamed from tmpdb44b
Sep 3 23:23:55.141671 kernel: eth0: renamed from tmp30550
Sep 3 23:23:55.144584 systemd-networkd[1445]: lxc496fc61ab7d5: Link UP
Sep 3 23:23:55.146829 systemd-networkd[1445]: lxc69683b82caad: Link UP
Sep 3 23:23:55.147243 systemd-networkd[1445]: lxc69683b82caad: Gained carrier
Sep 3 23:23:55.147418 systemd-networkd[1445]: lxc496fc61ab7d5: Gained carrier
Sep 3 23:23:55.977778 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Sep 3 23:23:56.681862 systemd-networkd[1445]: lxc496fc61ab7d5: Gained IPv6LL
Sep 3 23:23:56.939128 systemd-networkd[1445]: lxc69683b82caad: Gained IPv6LL
Sep 3 23:23:58.668389 containerd[1526]: time="2025-09-03T23:23:58.668337393Z" level=info msg="connecting to shim 30550a500e16dacde2d2f6c519958cad8a8365baff39ac2808c42f9218ba75c7" address="unix:///run/containerd/s/d055c65852368d5da225cc24c8d5575a7b5b7e04297a348f3b76e10a591addf6" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:58.668791 containerd[1526]: time="2025-09-03T23:23:58.668536582Z" level=info msg="connecting to shim db44be200d19ed236e0f4e2d51a8a8017b11498d025ea01a3e73f5d27e3466f1" address="unix:///run/containerd/s/7ffe5a1fa4cc479a8fbe724534f30c064b26b59eb15e743231b852345b36d896" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:58.691825 systemd[1]: Started cri-containerd-30550a500e16dacde2d2f6c519958cad8a8365baff39ac2808c42f9218ba75c7.scope - libcontainer container 30550a500e16dacde2d2f6c519958cad8a8365baff39ac2808c42f9218ba75c7.
Sep 3 23:23:58.693384 systemd[1]: Started cri-containerd-db44be200d19ed236e0f4e2d51a8a8017b11498d025ea01a3e73f5d27e3466f1.scope - libcontainer container db44be200d19ed236e0f4e2d51a8a8017b11498d025ea01a3e73f5d27e3466f1.
Sep 3 23:23:58.705244 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 3 23:23:58.706556 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 3 23:23:58.727485 containerd[1526]: time="2025-09-03T23:23:58.727442419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qr8mf,Uid:455b7e6f-6d39-41a2-bbca-9abace7acb34,Namespace:kube-system,Attempt:0,} returns sandbox id \"db44be200d19ed236e0f4e2d51a8a8017b11498d025ea01a3e73f5d27e3466f1\""
Sep 3 23:23:58.730698 containerd[1526]: time="2025-09-03T23:23:58.729993527Z" level=info msg="CreateContainer within sandbox \"db44be200d19ed236e0f4e2d51a8a8017b11498d025ea01a3e73f5d27e3466f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 3 23:23:58.742730 containerd[1526]: time="2025-09-03T23:23:58.742693511Z" level=info msg="Container aa5e651a25cd17663c7ad075e0201e6b11bde7198356e6184f8b7a5e2260022c: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:58.753091 containerd[1526]: time="2025-09-03T23:23:58.753043136Z" level=info msg="CreateContainer within sandbox \"db44be200d19ed236e0f4e2d51a8a8017b11498d025ea01a3e73f5d27e3466f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa5e651a25cd17663c7ad075e0201e6b11bde7198356e6184f8b7a5e2260022c\""
Sep 3 23:23:58.753913 containerd[1526]: time="2025-09-03T23:23:58.753852574Z" level=info msg="StartContainer for \"aa5e651a25cd17663c7ad075e0201e6b11bde7198356e6184f8b7a5e2260022c\""
Sep 3 23:23:58.756178 containerd[1526]: time="2025-09-03T23:23:58.756112017Z" level=info msg="connecting to shim aa5e651a25cd17663c7ad075e0201e6b11bde7198356e6184f8b7a5e2260022c" address="unix:///run/containerd/s/7ffe5a1fa4cc479a8fbe724534f30c064b26b59eb15e743231b852345b36d896" protocol=ttrpc version=3
Sep 3 23:23:58.757787 containerd[1526]: time="2025-09-03T23:23:58.757752532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jphnr,Uid:802f9f46-fb26-4052-a8ab-aa9588dab9e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"30550a500e16dacde2d2f6c519958cad8a8365baff39ac2808c42f9218ba75c7\""
Sep 3 23:23:58.772838 containerd[1526]: time="2025-09-03T23:23:58.772806034Z" level=info msg="CreateContainer within sandbox \"30550a500e16dacde2d2f6c519958cad8a8365baff39ac2808c42f9218ba75c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 3 23:23:58.778842 systemd[1]: Started cri-containerd-aa5e651a25cd17663c7ad075e0201e6b11bde7198356e6184f8b7a5e2260022c.scope - libcontainer container aa5e651a25cd17663c7ad075e0201e6b11bde7198356e6184f8b7a5e2260022c.
Sep 3 23:23:58.779190 containerd[1526]: time="2025-09-03T23:23:58.779157946Z" level=info msg="Container 13f715c5aaec21953dcda0702069e38476482af81f2c3180bd8558321950c244: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:58.786137 containerd[1526]: time="2025-09-03T23:23:58.786100347Z" level=info msg="CreateContainer within sandbox \"30550a500e16dacde2d2f6c519958cad8a8365baff39ac2808c42f9218ba75c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13f715c5aaec21953dcda0702069e38476482af81f2c3180bd8558321950c244\""
Sep 3 23:23:58.787082 containerd[1526]: time="2025-09-03T23:23:58.787058378Z" level=info msg="StartContainer for \"13f715c5aaec21953dcda0702069e38476482af81f2c3180bd8558321950c244\""
Sep 3 23:23:58.787917 containerd[1526]: time="2025-09-03T23:23:58.787887935Z" level=info msg="connecting to shim 13f715c5aaec21953dcda0702069e38476482af81f2c3180bd8558321950c244" address="unix:///run/containerd/s/d055c65852368d5da225cc24c8d5575a7b5b7e04297a348f3b76e10a591addf6" protocol=ttrpc version=3
Sep 3 23:23:58.807811 systemd[1]: Started cri-containerd-13f715c5aaec21953dcda0702069e38476482af81f2c3180bd8558321950c244.scope - libcontainer container 13f715c5aaec21953dcda0702069e38476482af81f2c3180bd8558321950c244.
Sep 3 23:23:58.826914 containerd[1526]: time="2025-09-03T23:23:58.826864241Z" level=info msg="StartContainer for \"aa5e651a25cd17663c7ad075e0201e6b11bde7198356e6184f8b7a5e2260022c\" returns successfully"
Sep 3 23:23:58.837471 containerd[1526]: time="2025-09-03T23:23:58.837436575Z" level=info msg="StartContainer for \"13f715c5aaec21953dcda0702069e38476482af81f2c3180bd8558321950c244\" returns successfully"
Sep 3 23:23:59.548183 kubelet[2643]: I0903 23:23:59.548078 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jphnr" podStartSLOduration=20.548061302 podStartE2EDuration="20.548061302s" podCreationTimestamp="2025-09-03 23:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:59.546397703 +0000 UTC m=+25.217452669" watchObservedRunningTime="2025-09-03 23:23:59.548061302 +0000 UTC m=+25.219116268"
Sep 3 23:23:59.574652 kubelet[2643]: I0903 23:23:59.574401 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qr8mf" podStartSLOduration=20.574383027 podStartE2EDuration="20.574383027s" podCreationTimestamp="2025-09-03 23:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:59.573733098 +0000 UTC m=+25.244788064" watchObservedRunningTime="2025-09-03 23:23:59.574383027 +0000 UTC m=+25.245437953"
Sep 3 23:24:03.950402 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:50168.service - OpenSSH per-connection server daemon (10.0.0.1:50168).
Sep 3 23:24:04.007345 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 50168 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:04.008983 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:04.013666 systemd-logind[1505]: New session 8 of user core.
Sep 3 23:24:04.023870 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 3 23:24:04.158432 sshd[3983]: Connection closed by 10.0.0.1 port 50168
Sep 3 23:24:04.158979 sshd-session[3981]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:04.163575 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:50168.service: Deactivated successfully.
Sep 3 23:24:04.165388 systemd[1]: session-8.scope: Deactivated successfully.
Sep 3 23:24:04.167672 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit.
Sep 3 23:24:04.168916 systemd-logind[1505]: Removed session 8.
Sep 3 23:24:09.174684 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:50176.service - OpenSSH per-connection server daemon (10.0.0.1:50176).
Sep 3 23:24:09.236924 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 50176 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:09.238519 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:09.243073 systemd-logind[1505]: New session 9 of user core.
Sep 3 23:24:09.258833 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 3 23:24:09.399573 sshd[3999]: Connection closed by 10.0.0.1 port 50176
Sep 3 23:24:09.399915 sshd-session[3997]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:09.403932 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit.
Sep 3 23:24:09.404305 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:50176.service: Deactivated successfully.
Sep 3 23:24:09.407239 systemd[1]: session-9.scope: Deactivated successfully.
Sep 3 23:24:09.409054 systemd-logind[1505]: Removed session 9.
Sep 3 23:24:14.414580 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:58988.service - OpenSSH per-connection server daemon (10.0.0.1:58988).
Sep 3 23:24:14.461648 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 58988 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:14.463524 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:14.468751 systemd-logind[1505]: New session 10 of user core.
Sep 3 23:24:14.478837 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 3 23:24:14.596523 sshd[4021]: Connection closed by 10.0.0.1 port 58988
Sep 3 23:24:14.596926 sshd-session[4019]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:14.607005 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:58988.service: Deactivated successfully.
Sep 3 23:24:14.609988 systemd[1]: session-10.scope: Deactivated successfully.
Sep 3 23:24:14.610625 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit.
Sep 3 23:24:14.613905 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:58996.service - OpenSSH per-connection server daemon (10.0.0.1:58996).
Sep 3 23:24:14.614955 systemd-logind[1505]: Removed session 10.
Sep 3 23:24:14.668810 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 58996 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:14.670015 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:14.674849 systemd-logind[1505]: New session 11 of user core.
Sep 3 23:24:14.681896 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 3 23:24:14.839686 sshd[4037]: Connection closed by 10.0.0.1 port 58996
Sep 3 23:24:14.840161 sshd-session[4035]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:14.854021 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:58996.service: Deactivated successfully.
Sep 3 23:24:14.859201 systemd[1]: session-11.scope: Deactivated successfully.
Sep 3 23:24:14.861893 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit.
Sep 3 23:24:14.866487 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:59004.service - OpenSSH per-connection server daemon (10.0.0.1:59004).
Sep 3 23:24:14.871570 systemd-logind[1505]: Removed session 11.
Sep 3 23:24:14.931480 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 59004 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:14.933492 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:14.937698 systemd-logind[1505]: New session 12 of user core.
Sep 3 23:24:14.946861 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 3 23:24:15.060484 sshd[4056]: Connection closed by 10.0.0.1 port 59004
Sep 3 23:24:15.060852 sshd-session[4054]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:15.063565 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:59004.service: Deactivated successfully.
Sep 3 23:24:15.066440 systemd[1]: session-12.scope: Deactivated successfully.
Sep 3 23:24:15.067787 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit.
Sep 3 23:24:15.068939 systemd-logind[1505]: Removed session 12.
Sep 3 23:24:20.084802 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:52320.service - OpenSSH per-connection server daemon (10.0.0.1:52320).
Sep 3 23:24:20.156119 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 52320 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:20.157387 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:20.161180 systemd-logind[1505]: New session 13 of user core.
Sep 3 23:24:20.171843 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 3 23:24:20.298650 sshd[4071]: Connection closed by 10.0.0.1 port 52320
Sep 3 23:24:20.299041 sshd-session[4069]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:20.302929 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:52320.service: Deactivated successfully.
Sep 3 23:24:20.304768 systemd[1]: session-13.scope: Deactivated successfully.
Sep 3 23:24:20.305797 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit.
Sep 3 23:24:20.306891 systemd-logind[1505]: Removed session 13.
Sep 3 23:24:25.314099 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:52324.service - OpenSSH per-connection server daemon (10.0.0.1:52324).
Sep 3 23:24:25.387755 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 52324 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:25.389801 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:25.393708 systemd-logind[1505]: New session 14 of user core.
Sep 3 23:24:25.407848 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 3 23:24:25.527527 sshd[4086]: Connection closed by 10.0.0.1 port 52324
Sep 3 23:24:25.527892 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:25.541178 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:52324.service: Deactivated successfully.
Sep 3 23:24:25.543131 systemd[1]: session-14.scope: Deactivated successfully.
Sep 3 23:24:25.545366 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit.
Sep 3 23:24:25.547779 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:52326.service - OpenSSH per-connection server daemon (10.0.0.1:52326).
Sep 3 23:24:25.549669 systemd-logind[1505]: Removed session 14.
Sep 3 23:24:25.598015 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 52326 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:25.599454 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:25.603731 systemd-logind[1505]: New session 15 of user core.
Sep 3 23:24:25.612799 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 3 23:24:25.811562 sshd[4101]: Connection closed by 10.0.0.1 port 52326
Sep 3 23:24:25.812041 sshd-session[4099]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:25.829013 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:52326.service: Deactivated successfully.
Sep 3 23:24:25.830727 systemd[1]: session-15.scope: Deactivated successfully.
Sep 3 23:24:25.832025 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit.
Sep 3 23:24:25.834028 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:52332.service - OpenSSH per-connection server daemon (10.0.0.1:52332).
Sep 3 23:24:25.835230 systemd-logind[1505]: Removed session 15.
Sep 3 23:24:25.894629 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 52332 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:25.896264 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:25.902689 systemd-logind[1505]: New session 16 of user core.
Sep 3 23:24:25.923870 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 3 23:24:26.992295 sshd[4114]: Connection closed by 10.0.0.1 port 52332
Sep 3 23:24:26.992614 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:27.002803 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:52332.service: Deactivated successfully.
Sep 3 23:24:27.005770 systemd[1]: session-16.scope: Deactivated successfully.
Sep 3 23:24:27.007350 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit.
Sep 3 23:24:27.012082 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:52338.service - OpenSSH per-connection server daemon (10.0.0.1:52338).
Sep 3 23:24:27.015345 systemd-logind[1505]: Removed session 16.
Sep 3 23:24:27.062868 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 52338 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:27.064309 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:27.067932 systemd-logind[1505]: New session 17 of user core.
Sep 3 23:24:27.078803 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 3 23:24:27.295918 sshd[4137]: Connection closed by 10.0.0.1 port 52338
Sep 3 23:24:27.296809 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:27.306090 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:52338.service: Deactivated successfully.
Sep 3 23:24:27.309607 systemd[1]: session-17.scope: Deactivated successfully.
Sep 3 23:24:27.311128 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit.
Sep 3 23:24:27.314270 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:52350.service - OpenSSH per-connection server daemon (10.0.0.1:52350).
Sep 3 23:24:27.315207 systemd-logind[1505]: Removed session 17.
Sep 3 23:24:27.368601 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 52350 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:27.369846 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:27.374574 systemd-logind[1505]: New session 18 of user core.
Sep 3 23:24:27.393884 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 3 23:24:27.501865 sshd[4151]: Connection closed by 10.0.0.1 port 52350
Sep 3 23:24:27.502189 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:27.505777 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:52350.service: Deactivated successfully.
Sep 3 23:24:27.507384 systemd[1]: session-18.scope: Deactivated successfully.
Sep 3 23:24:27.509273 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit.
Sep 3 23:24:27.510759 systemd-logind[1505]: Removed session 18.
Sep 3 23:24:32.516027 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:35480.service - OpenSSH per-connection server daemon (10.0.0.1:35480).
Sep 3 23:24:32.571661 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 35480 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:32.572967 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:32.577261 systemd-logind[1505]: New session 19 of user core.
Sep 3 23:24:32.589830 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 3 23:24:32.702378 sshd[4171]: Connection closed by 10.0.0.1 port 35480
Sep 3 23:24:32.702751 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:32.707180 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:35480.service: Deactivated successfully.
Sep 3 23:24:32.708988 systemd[1]: session-19.scope: Deactivated successfully.
Sep 3 23:24:32.711184 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit.
Sep 3 23:24:32.712179 systemd-logind[1505]: Removed session 19.
Sep 3 23:24:37.719677 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:35492.service - OpenSSH per-connection server daemon (10.0.0.1:35492).
Sep 3 23:24:37.776235 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 35492 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:37.778142 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:37.782225 systemd-logind[1505]: New session 20 of user core.
Sep 3 23:24:37.794816 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 3 23:24:37.925671 sshd[4188]: Connection closed by 10.0.0.1 port 35492
Sep 3 23:24:37.926171 sshd-session[4186]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:37.930585 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit.
Sep 3 23:24:37.931063 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:35492.service: Deactivated successfully.
Sep 3 23:24:37.933189 systemd[1]: session-20.scope: Deactivated successfully.
Sep 3 23:24:37.936539 systemd-logind[1505]: Removed session 20.
Sep 3 23:24:42.941274 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:47944.service - OpenSSH per-connection server daemon (10.0.0.1:47944).
Sep 3 23:24:43.003005 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 47944 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:43.004291 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:43.008577 systemd-logind[1505]: New session 21 of user core.
Sep 3 23:24:43.020825 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 3 23:24:43.127238 sshd[4205]: Connection closed by 10.0.0.1 port 47944
Sep 3 23:24:43.127819 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:43.143005 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:47944.service: Deactivated successfully.
Sep 3 23:24:43.144739 systemd[1]: session-21.scope: Deactivated successfully.
Sep 3 23:24:43.145495 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit.
Sep 3 23:24:43.148381 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:47952.service - OpenSSH per-connection server daemon (10.0.0.1:47952).
Sep 3 23:24:43.149135 systemd-logind[1505]: Removed session 21.
Sep 3 23:24:43.202196 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 47952 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:43.203612 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:43.207575 systemd-logind[1505]: New session 22 of user core.
Sep 3 23:24:43.216845 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 3 23:24:44.961431 containerd[1526]: time="2025-09-03T23:24:44.961228330Z" level=info msg="StopContainer for \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" with timeout 30 (s)"
Sep 3 23:24:44.965919 containerd[1526]: time="2025-09-03T23:24:44.965500877Z" level=info msg="Stop container \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" with signal terminated"
Sep 3 23:24:44.982819 systemd[1]: cri-containerd-9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11.scope: Deactivated successfully.
Sep 3 23:24:44.983352 systemd[1]: cri-containerd-9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11.scope: Consumed 342ms CPU time, 26.6M memory peak, 1.1M read from disk, 4K written to disk.
Sep 3 23:24:44.987582 containerd[1526]: time="2025-09-03T23:24:44.987445491Z" level=info msg="received exit event container_id:\"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" id:\"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" pid:3173 exited_at:{seconds:1756941884 nanos:987100772}"
Sep 3 23:24:44.988029 containerd[1526]: time="2025-09-03T23:24:44.987994369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" id:\"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" pid:3173 exited_at:{seconds:1756941884 nanos:987100772}"
Sep 3 23:24:45.001864 containerd[1526]: time="2025-09-03T23:24:45.001816927Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:24:45.008413 containerd[1526]: time="2025-09-03T23:24:45.008376348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" id:\"646c4654d03c61b2f07ffc1e3adfeb6a0c9b7f6d3160a68cef2a860017a60ead\" pid:4248 exited_at:{seconds:1756941885 nanos:8099949}"
Sep 3 23:24:45.010752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11-rootfs.mount: Deactivated successfully.
Sep 3 23:24:45.013338 containerd[1526]: time="2025-09-03T23:24:45.013301453Z" level=info msg="StopContainer for \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" with timeout 2 (s)"
Sep 3 23:24:45.013839 containerd[1526]: time="2025-09-03T23:24:45.013808292Z" level=info msg="Stop container \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" with signal terminated"
Sep 3 23:24:45.021801 systemd-networkd[1445]: lxc_health: Link DOWN
Sep 3 23:24:45.022687 systemd-networkd[1445]: lxc_health: Lost carrier
Sep 3 23:24:45.030800 containerd[1526]: time="2025-09-03T23:24:45.030765361Z" level=info msg="StopContainer for \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" returns successfully"
Sep 3 23:24:45.034239 containerd[1526]: time="2025-09-03T23:24:45.034177551Z" level=info msg="StopPodSandbox for \"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\""
Sep 3 23:24:45.037508 systemd[1]: cri-containerd-7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797.scope: Deactivated successfully.
Sep 3 23:24:45.037830 systemd[1]: cri-containerd-7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797.scope: Consumed 6.160s CPU time, 125.4M memory peak, 128K read from disk, 12.9M written to disk.
Sep 3 23:24:45.039328 containerd[1526]: time="2025-09-03T23:24:45.039119977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" id:\"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" pid:3303 exited_at:{seconds:1756941885 nanos:38608258}"
Sep 3 23:24:45.039328 containerd[1526]: time="2025-09-03T23:24:45.039188456Z" level=info msg="received exit event container_id:\"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" id:\"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" pid:3303 exited_at:{seconds:1756941885 nanos:38608258}"
Sep 3 23:24:45.042100 containerd[1526]: time="2025-09-03T23:24:45.042058808Z" level=info msg="Container to stop \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:24:45.048046 systemd[1]: cri-containerd-78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94.scope: Deactivated successfully.
Sep 3 23:24:45.051232 containerd[1526]: time="2025-09-03T23:24:45.051145501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\" id:\"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\" pid:2879 exit_status:137 exited_at:{seconds:1756941885 nanos:50883262}"
Sep 3 23:24:45.060893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797-rootfs.mount: Deactivated successfully.
Sep 3 23:24:45.075365 containerd[1526]: time="2025-09-03T23:24:45.075128990Z" level=info msg="StopContainer for \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" returns successfully"
Sep 3 23:24:45.076544 containerd[1526]: time="2025-09-03T23:24:45.076480066Z" level=info msg="StopPodSandbox for \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\""
Sep 3 23:24:45.077137 containerd[1526]: time="2025-09-03T23:24:45.076785545Z" level=info msg="Container to stop \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:24:45.077137 containerd[1526]: time="2025-09-03T23:24:45.076817985Z" level=info msg="Container to stop \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:24:45.077137 containerd[1526]: time="2025-09-03T23:24:45.076827065Z" level=info msg="Container to stop \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:24:45.077137 containerd[1526]: time="2025-09-03T23:24:45.076835545Z" level=info msg="Container to stop \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:24:45.077137 containerd[1526]: time="2025-09-03T23:24:45.076845905Z" level=info msg="Container to stop \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:24:45.080836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94-rootfs.mount: Deactivated successfully.
Sep 3 23:24:45.084861 systemd[1]: cri-containerd-d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884.scope: Deactivated successfully.
Sep 3 23:24:45.085759 containerd[1526]: time="2025-09-03T23:24:45.085728678Z" level=info msg="shim disconnected" id=78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94 namespace=k8s.io
Sep 3 23:24:45.093657 containerd[1526]: time="2025-09-03T23:24:45.085759918Z" level=warning msg="cleaning up after shim disconnected" id=78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94 namespace=k8s.io
Sep 3 23:24:45.093657 containerd[1526]: time="2025-09-03T23:24:45.093402095Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 3 23:24:45.111875 containerd[1526]: time="2025-09-03T23:24:45.111835401Z" level=info msg="TearDown network for sandbox \"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\" successfully"
Sep 3 23:24:45.111921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884-rootfs.mount: Deactivated successfully.
Sep 3 23:24:45.112162 containerd[1526]: time="2025-09-03T23:24:45.112140520Z" level=info msg="StopPodSandbox for \"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\" returns successfully"
Sep 3 23:24:45.113517 containerd[1526]: time="2025-09-03T23:24:45.111867601Z" level=info msg="received exit event sandbox_id:\"78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94\" exit_status:137 exited_at:{seconds:1756941885 nanos:50883262}"
Sep 3 23:24:45.113517 containerd[1526]: time="2025-09-03T23:24:45.112016120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" id:\"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" pid:2796 exit_status:137 exited_at:{seconds:1756941885 nanos:91921180}"
Sep 3 23:24:45.114645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78cee60b1d7b28f86760758e60aef861aca9c816903df50b5f534447acdb0c94-shm.mount: Deactivated successfully.
Sep 3 23:24:45.119396 containerd[1526]: time="2025-09-03T23:24:45.119316819Z" level=info msg="received exit event sandbox_id:\"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" exit_status:137 exited_at:{seconds:1756941885 nanos:91921180}"
Sep 3 23:24:45.119645 containerd[1526]: time="2025-09-03T23:24:45.119608738Z" level=info msg="shim disconnected" id=d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884 namespace=k8s.io
Sep 3 23:24:45.119847 containerd[1526]: time="2025-09-03T23:24:45.119651538Z" level=warning msg="cleaning up after shim disconnected" id=d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884 namespace=k8s.io
Sep 3 23:24:45.119847 containerd[1526]: time="2025-09-03T23:24:45.119680338Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 3 23:24:45.120227 containerd[1526]: time="2025-09-03T23:24:45.119928737Z" level=info msg="TearDown network for sandbox \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" successfully"
Sep 3 23:24:45.120227 containerd[1526]: time="2025-09-03T23:24:45.120212376Z" level=info msg="StopPodSandbox for \"d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884\" returns successfully"
Sep 3 23:24:45.230940 kubelet[2643]: I0903 23:24:45.230802 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgnbx\" (UniqueName: \"kubernetes.io/projected/c619e907-2c9c-4a1c-b128-59585fd77354-kube-api-access-xgnbx\") pod \"c619e907-2c9c-4a1c-b128-59585fd77354\" (UID: \"c619e907-2c9c-4a1c-b128-59585fd77354\") "
Sep 3 23:24:45.230940 kubelet[2643]: I0903 23:24:45.230908 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hubble-tls\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.230940 kubelet[2643]: I0903 23:24:45.230932 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hostproc\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232616 kubelet[2643]: I0903 23:24:45.230949 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cni-path\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232616 kubelet[2643]: I0903 23:24:45.230970 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5wld\" (UniqueName: \"kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-kube-api-access-w5wld\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232616 kubelet[2643]: I0903 23:24:45.230984 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-net\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232616 kubelet[2643]: I0903 23:24:45.231000 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-kernel\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232616 kubelet[2643]: I0903 23:24:45.231017 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-lib-modules\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232616 kubelet[2643]: I0903 23:24:45.231039 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-clustermesh-secrets\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232788 kubelet[2643]: I0903 23:24:45.231056 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-config-path\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232788 kubelet[2643]: I0903 23:24:45.231070 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-bpf-maps\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232788 kubelet[2643]: I0903 23:24:45.231085 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-etc-cni-netd\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232788 kubelet[2643]: I0903 23:24:45.231098 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-xtables-lock\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.232788 kubelet[2643]: I0903 23:24:45.231115 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c619e907-2c9c-4a1c-b128-59585fd77354-cilium-config-path\") pod \"c619e907-2c9c-4a1c-b128-59585fd77354\" (UID: \"c619e907-2c9c-4a1c-b128-59585fd77354\") "
Sep 3 23:24:45.235695 kubelet[2643]: I0903 23:24:45.235382 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cni-path" (OuterVolumeSpecName: "cni-path") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.235695 kubelet[2643]: I0903 23:24:45.235594 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.235841 kubelet[2643]: I0903 23:24:45.235813 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c619e907-2c9c-4a1c-b128-59585fd77354-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c619e907-2c9c-4a1c-b128-59585fd77354" (UID: "c619e907-2c9c-4a1c-b128-59585fd77354"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 3 23:24:45.235881 kubelet[2643]: I0903 23:24:45.235868 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.236318 kubelet[2643]: I0903 23:24:45.236271 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hostproc" (OuterVolumeSpecName: "hostproc") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.236383 kubelet[2643]: I0903 23:24:45.236329 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.236383 kubelet[2643]: I0903 23:24:45.236349 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.237688 kubelet[2643]: I0903 23:24:45.236548 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.237812 kubelet[2643]: I0903 23:24:45.237749 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c619e907-2c9c-4a1c-b128-59585fd77354-kube-api-access-xgnbx" (OuterVolumeSpecName: "kube-api-access-xgnbx") pod "c619e907-2c9c-4a1c-b128-59585fd77354" (UID: "c619e907-2c9c-4a1c-b128-59585fd77354"). InnerVolumeSpecName "kube-api-access-xgnbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 3 23:24:45.237944 kubelet[2643]: I0903 23:24:45.237806 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.238101 kubelet[2643]: I0903 23:24:45.238071 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 3 23:24:45.238470 kubelet[2643]: I0903 23:24:45.238438 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 3 23:24:45.238524 kubelet[2643]: I0903 23:24:45.238451 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-kube-api-access-w5wld" (OuterVolumeSpecName: "kube-api-access-w5wld") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "kube-api-access-w5wld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 3 23:24:45.239096 kubelet[2643]: I0903 23:24:45.239073 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 3 23:24:45.331355 kubelet[2643]: I0903 23:24:45.331301 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-cgroup\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.331355 kubelet[2643]: I0903 23:24:45.331346 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-run\") pod \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\" (UID: \"c5b6f835-f6d4-4c14-957a-ec9a889e6aa4\") "
Sep 3 23:24:45.331714 kubelet[2643]: I0903 23:24:45.331358 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.331714 kubelet[2643]: I0903 23:24:45.331385 2643 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331714 kubelet[2643]: I0903 23:24:45.331416 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331714 kubelet[2643]: I0903 23:24:45.331426 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c619e907-2c9c-4a1c-b128-59585fd77354-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331714 kubelet[2643]: I0903 23:24:45.331434 2643 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331714 kubelet[2643]: I0903 23:24:45.331441 2643 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331714 kubelet[2643]: I0903 23:24:45.331437 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" (UID: "c5b6f835-f6d4-4c14-957a-ec9a889e6aa4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331453 2643 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331485 2643 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgnbx\" (UniqueName: \"kubernetes.io/projected/c619e907-2c9c-4a1c-b128-59585fd77354-kube-api-access-xgnbx\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331497 2643 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331506 2643 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331514 2643 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331523 2643 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5wld\" (UniqueName: \"kubernetes.io/projected/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-kube-api-access-w5wld\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331531 2643 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.331912 kubelet[2643]: I0903 23:24:45.331540 2643 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.332074 kubelet[2643]: I0903 23:24:45.331551 2643 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.432301 kubelet[2643]: I0903 23:24:45.432258 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.432301 kubelet[2643]: I0903 23:24:45.432296 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 3 23:24:45.629688 kubelet[2643]: I0903 23:24:45.629578 2643 scope.go:117] "RemoveContainer" containerID="9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11"
Sep 3 23:24:45.635651 containerd[1526]: time="2025-09-03T23:24:45.635565647Z" level=info msg="RemoveContainer for \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\""
Sep 3 23:24:45.643893 systemd[1]: Removed slice kubepods-besteffort-podc619e907_2c9c_4a1c_b128_59585fd77354.slice - libcontainer container kubepods-besteffort-podc619e907_2c9c_4a1c_b128_59585fd77354.slice.
Sep 3 23:24:45.644075 systemd[1]: kubepods-besteffort-podc619e907_2c9c_4a1c_b128_59585fd77354.slice: Consumed 360ms CPU time, 26.9M memory peak, 1.1M read from disk, 4K written to disk.
Sep 3 23:24:45.648404 systemd[1]: Removed slice kubepods-burstable-podc5b6f835_f6d4_4c14_957a_ec9a889e6aa4.slice - libcontainer container kubepods-burstable-podc5b6f835_f6d4_4c14_957a_ec9a889e6aa4.slice.
Sep 3 23:24:45.648533 systemd[1]: kubepods-burstable-podc5b6f835_f6d4_4c14_957a_ec9a889e6aa4.slice: Consumed 6.248s CPU time, 125.7M memory peak, 132K read from disk, 18.4M written to disk.
Sep 3 23:24:45.649012 containerd[1526]: time="2025-09-03T23:24:45.648952607Z" level=info msg="RemoveContainer for \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" returns successfully"
Sep 3 23:24:45.653335 kubelet[2643]: I0903 23:24:45.653228 2643 scope.go:117] "RemoveContainer" containerID="9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11"
Sep 3 23:24:45.653882 containerd[1526]: time="2025-09-03T23:24:45.653837393Z" level=error msg="ContainerStatus for \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\": not found"
Sep 3 23:24:45.659948 kubelet[2643]: E0903 23:24:45.659669 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\": not found" containerID="9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11"
Sep 3 23:24:45.659948 kubelet[2643]: I0903 23:24:45.659716 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11"} err="failed to get container status \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d134745a614f6f3c82c119b572386caf9bdfa81efb242573e109106cf12bc11\": not found"
Sep 3 23:24:45.659948 kubelet[2643]: I0903 23:24:45.659794 2643 scope.go:117] "RemoveContainer" containerID="7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797"
Sep 3 23:24:45.662132 containerd[1526]: time="2025-09-03T23:24:45.662088928Z" level=info msg="RemoveContainer for \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\""
Sep 3 23:24:45.669128 containerd[1526]: time="2025-09-03T23:24:45.669088268Z" level=info msg="RemoveContainer for \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" returns successfully"
Sep 3 23:24:45.669301 kubelet[2643]: I0903 23:24:45.669271 2643 scope.go:117] "RemoveContainer" containerID="070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41"
Sep 3 23:24:45.670617 containerd[1526]: time="2025-09-03T23:24:45.670566343Z" level=info msg="RemoveContainer for \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\""
Sep 3 23:24:45.674016 containerd[1526]: time="2025-09-03T23:24:45.673978413Z" level=info msg="RemoveContainer for \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" returns successfully"
Sep 3 23:24:45.674152 kubelet[2643]: I0903 23:24:45.674130 2643 scope.go:117] "RemoveContainer" containerID="b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29"
Sep 3 23:24:45.676170 containerd[1526]: time="2025-09-03T23:24:45.676145247Z" level=info msg="RemoveContainer for \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\""
Sep 3 23:24:45.679245 containerd[1526]: time="2025-09-03T23:24:45.679210598Z" level=info msg="RemoveContainer for \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" returns successfully"
Sep 3 23:24:45.679402 kubelet[2643]: I0903 23:24:45.679384 2643 scope.go:117] "RemoveContainer" containerID="008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf"
Sep 3 23:24:45.680644 containerd[1526]: time="2025-09-03T23:24:45.680609674Z" level=info msg="RemoveContainer for \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\""
Sep 3 23:24:45.683068 containerd[1526]: time="2025-09-03T23:24:45.683031226Z" level=info msg="RemoveContainer for \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" returns successfully"
Sep 3 23:24:45.683266 kubelet[2643]: I0903 23:24:45.683249 2643 scope.go:117] "RemoveContainer" containerID="b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143"
Sep 3 23:24:45.684572 containerd[1526]: time="2025-09-03T23:24:45.684550622Z" level=info msg="RemoveContainer for \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\""
Sep 3 23:24:45.688506 containerd[1526]: time="2025-09-03T23:24:45.688472210Z" level=info msg="RemoveContainer for \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" returns successfully"
Sep 3 23:24:45.688720 kubelet[2643]: I0903 23:24:45.688620 2643 scope.go:117] "RemoveContainer" containerID="7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797"
Sep 3 23:24:45.688833 containerd[1526]: time="2025-09-03T23:24:45.688804529Z" level=error msg="ContainerStatus for \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\": not found"
Sep 3 23:24:45.688927 kubelet[2643]: E0903 23:24:45.688906 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\": not found" containerID="7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797"
Sep 3 23:24:45.688979 kubelet[2643]: I0903 23:24:45.688950 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797"} err="failed to get container status \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e9c4dfb493ab7e9edde1314d3b7e1b3ed2b5313a7f3830cb7497f651cb99797\": not found"
Sep 3 23:24:45.688979 kubelet[2643]: I0903 23:24:45.688973 2643 scope.go:117] "RemoveContainer" containerID="070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41"
Sep 3 23:24:45.689136 containerd[1526]: time="2025-09-03T23:24:45.689112648Z" level=error msg="ContainerStatus for \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\": not found"
Sep 3 23:24:45.689243 kubelet[2643]: E0903 23:24:45.689226 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\": not found" containerID="070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41"
Sep 3 23:24:45.689289 kubelet[2643]: I0903 23:24:45.689248 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41"} err="failed to get container status \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\": rpc error: code = NotFound desc = an error occurred when try to find container \"070626d235ff4848911ff876ca19d8aaeb7573adee20bd5f3691021452650e41\": not found"
Sep 3 23:24:45.689289 kubelet[2643]: I0903 23:24:45.689264 2643 scope.go:117] "RemoveContainer" containerID="b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29"
Sep 3 23:24:45.689444 containerd[1526]: time="2025-09-03T23:24:45.689412367Z" level=error msg="ContainerStatus for \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\": not found"
Sep 3 23:24:45.689591 kubelet[2643]: E0903 23:24:45.689552 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\": not found" containerID="b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29"
Sep 3 23:24:45.689629 kubelet[2643]: I0903 23:24:45.689597 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29"} err="failed to get container status \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0af62135677e4fcd90eca08388874f487b5c59aa6f5be1a9c41747fcaa98d29\": not found"
Sep 3 23:24:45.689629 kubelet[2643]: I0903 23:24:45.689612 2643 scope.go:117] "RemoveContainer" containerID="008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf"
Sep 3 23:24:45.689815 containerd[1526]: time="2025-09-03T23:24:45.689787646Z" level=error msg="ContainerStatus for \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\": not found"
Sep 3 23:24:45.689934 kubelet[2643]: E0903 23:24:45.689915 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\": not found" containerID="008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf"
Sep 3 23:24:45.689970 kubelet[2643]: I0903 23:24:45.689936 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf"} err="failed to get container status \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"008f19a4f0b298fe8f4604e1b281113047957510d694f576191759cc0da625cf\": not found"
Sep 3 23:24:45.689970 kubelet[2643]: I0903 23:24:45.689949 2643 scope.go:117] "RemoveContainer" containerID="b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143"
Sep 3 23:24:45.690096 containerd[1526]: time="2025-09-03T23:24:45.690057925Z" level=error msg="ContainerStatus for \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\": not found"
Sep 3 23:24:45.690183 kubelet[2643]: E0903 23:24:45.690164 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\": not found" containerID="b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143"
Sep 3 23:24:45.690263 kubelet[2643]: I0903 23:24:45.690244 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143"} err="failed to get container status \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2295769f473acaa94df24dae524dba4528238d765c15c98d2b88e021f057143\": not found"
Sep 3 23:24:46.010496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4660bc61a95351ac81521b0a70140c87bb4719dbbf5fa17de91438562cee884-shm.mount: Deactivated successfully.
Sep 3 23:24:46.010604 systemd[1]: var-lib-kubelet-pods-c619e907\x2d2c9c\x2d4a1c\x2db128\x2d59585fd77354-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxgnbx.mount: Deactivated successfully.
Sep 3 23:24:46.010675 systemd[1]: var-lib-kubelet-pods-c5b6f835\x2df6d4\x2d4c14\x2d957a\x2dec9a889e6aa4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5wld.mount: Deactivated successfully.
Sep 3 23:24:46.010734 systemd[1]: var-lib-kubelet-pods-c5b6f835\x2df6d4\x2d4c14\x2d957a\x2dec9a889e6aa4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 3 23:24:46.010778 systemd[1]: var-lib-kubelet-pods-c5b6f835\x2df6d4\x2d4c14\x2d957a\x2dec9a889e6aa4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 3 23:24:46.436317 kubelet[2643]: I0903 23:24:46.435498 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" path="/var/lib/kubelet/pods/c5b6f835-f6d4-4c14-957a-ec9a889e6aa4/volumes"
Sep 3 23:24:46.436317 kubelet[2643]: I0903 23:24:46.436047 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c619e907-2c9c-4a1c-b128-59585fd77354" path="/var/lib/kubelet/pods/c619e907-2c9c-4a1c-b128-59585fd77354/volumes"
Sep 3 23:24:46.908168 sshd[4220]: Connection closed by 10.0.0.1 port 47952
Sep 3 23:24:46.907539 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:46.918824 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:47952.service: Deactivated successfully.
Sep 3 23:24:46.920427 systemd[1]: session-22.scope: Deactivated successfully.
Sep 3 23:24:46.920695 systemd[1]: session-22.scope: Consumed 1.065s CPU time, 24.5M memory peak.
Sep 3 23:24:46.921302 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit.
Sep 3 23:24:46.923912 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:47968.service - OpenSSH per-connection server daemon (10.0.0.1:47968).
Sep 3 23:24:46.924677 systemd-logind[1505]: Removed session 22.
Sep 3 23:24:46.973238 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 47968 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:46.974405 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:46.978375 systemd-logind[1505]: New session 23 of user core.
Sep 3 23:24:46.987813 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 3 23:24:48.254197 sshd[4374]: Connection closed by 10.0.0.1 port 47968
Sep 3 23:24:48.254778 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:48.263498 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:47968.service: Deactivated successfully.
Sep 3 23:24:48.265424 systemd[1]: session-23.scope: Deactivated successfully.
Sep 3 23:24:48.265854 systemd[1]: session-23.scope: Consumed 1.177s CPU time, 25.9M memory peak.
Sep 3 23:24:48.267860 systemd-logind[1505]: Session 23 logged out. Waiting for processes to exit.
Sep 3 23:24:48.271005 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:47976.service - OpenSSH per-connection server daemon (10.0.0.1:47976).
Sep 3 23:24:48.274084 systemd-logind[1505]: Removed session 23.
Sep 3 23:24:48.309350 kubelet[2643]: E0903 23:24:48.309285 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" containerName="mount-bpf-fs"
Sep 3 23:24:48.309350 kubelet[2643]: E0903 23:24:48.309323 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" containerName="clean-cilium-state"
Sep 3 23:24:48.309350 kubelet[2643]: E0903 23:24:48.309330 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" containerName="cilium-agent"
Sep 3 23:24:48.309350 kubelet[2643]: E0903 23:24:48.309337 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" containerName="mount-cgroup"
Sep 3 23:24:48.309350 kubelet[2643]: E0903 23:24:48.309342 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" containerName="apply-sysctl-overwrites"
Sep 3 23:24:48.309792 kubelet[2643]: E0903 23:24:48.309348 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c619e907-2c9c-4a1c-b128-59585fd77354" containerName="cilium-operator"
Sep 3 23:24:48.309792 kubelet[2643]: I0903 23:24:48.309468 2643 memory_manager.go:354] "RemoveStaleState removing state" podUID="c619e907-2c9c-4a1c-b128-59585fd77354" containerName="cilium-operator"
Sep 3 23:24:48.309792 kubelet[2643]: I0903 23:24:48.309475 2643 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b6f835-f6d4-4c14-957a-ec9a889e6aa4" containerName="cilium-agent"
Sep 3 23:24:48.321985 systemd[1]: Created slice kubepods-burstable-poda1535403_e606_4a1f_9f3b_bf3ad718ffda.slice - libcontainer container kubepods-burstable-poda1535403_e606_4a1f_9f3b_bf3ad718ffda.slice.
Sep 3 23:24:48.337673 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 47976 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:48.339197 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:48.350925 systemd-logind[1505]: New session 24 of user core.
Sep 3 23:24:48.368893 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 3 23:24:48.420888 sshd[4389]: Connection closed by 10.0.0.1 port 47976
Sep 3 23:24:48.421401 sshd-session[4386]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:48.437232 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:47976.service: Deactivated successfully.
Sep 3 23:24:48.439736 systemd[1]: session-24.scope: Deactivated successfully.
Sep 3 23:24:48.440632 systemd-logind[1505]: Session 24 logged out. Waiting for processes to exit.
Sep 3 23:24:48.443445 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:47984.service - OpenSSH per-connection server daemon (10.0.0.1:47984).
Sep 3 23:24:48.443944 systemd-logind[1505]: Removed session 24.
Sep 3 23:24:48.449561 kubelet[2643]: I0903 23:24:48.449530 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1535403-e606-4a1f-9f3b-bf3ad718ffda-cilium-config-path\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449561 kubelet[2643]: I0903 23:24:48.449566 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-host-proc-sys-kernel\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449699 kubelet[2643]: I0903 23:24:48.449587 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9qm\" (UniqueName: \"kubernetes.io/projected/a1535403-e606-4a1f-9f3b-bf3ad718ffda-kube-api-access-tw9qm\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449699 kubelet[2643]: I0903 23:24:48.449607 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1535403-e606-4a1f-9f3b-bf3ad718ffda-clustermesh-secrets\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449699 kubelet[2643]: I0903 23:24:48.449624 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-cilium-run\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449699 kubelet[2643]: I0903 23:24:48.449650 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-hostproc\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449699 kubelet[2643]: I0903 23:24:48.449666 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-etc-cni-netd\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449699 kubelet[2643]: I0903 23:24:48.449681 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-xtables-lock\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449817 kubelet[2643]: I0903 23:24:48.449697 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1535403-e606-4a1f-9f3b-bf3ad718ffda-hubble-tls\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449817 kubelet[2643]: I0903 23:24:48.449716 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-cilium-cgroup\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449817 kubelet[2643]: I0903 23:24:48.449731 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-host-proc-sys-net\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449817 kubelet[2643]: I0903 23:24:48.449746 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1535403-e606-4a1f-9f3b-bf3ad718ffda-cilium-ipsec-secrets\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449817 kubelet[2643]: I0903 23:24:48.449761 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-bpf-maps\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449817 kubelet[2643]: I0903 23:24:48.449796 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-cni-path\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.449937 kubelet[2643]: I0903 23:24:48.449812 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1535403-e606-4a1f-9f3b-bf3ad718ffda-lib-modules\") pod \"cilium-xfwdt\" (UID: \"a1535403-e606-4a1f-9f3b-bf3ad718ffda\") " pod="kube-system/cilium-xfwdt"
Sep 3 23:24:48.497756 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 47984 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:24:48.499011 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:48.503988 systemd-logind[1505]: New session 25 of user core.
Sep 3 23:24:48.515832 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 3 23:24:48.626863 containerd[1526]: time="2025-09-03T23:24:48.626824572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfwdt,Uid:a1535403-e606-4a1f-9f3b-bf3ad718ffda,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:48.649116 containerd[1526]: time="2025-09-03T23:24:48.649074951Z" level=info msg="connecting to shim 1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920" address="unix:///run/containerd/s/e128f405bf28dfa0bf308fb0552df58c4e142b7bd034564f6e3a70029d01c781" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:48.676879 systemd[1]: Started cri-containerd-1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920.scope - libcontainer container 1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920.
Sep 3 23:24:48.699085 containerd[1526]: time="2025-09-03T23:24:48.698962892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfwdt,Uid:a1535403-e606-4a1f-9f3b-bf3ad718ffda,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\""
Sep 3 23:24:48.703835 containerd[1526]: time="2025-09-03T23:24:48.703742919Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 3 23:24:48.709713 containerd[1526]: time="2025-09-03T23:24:48.709661783Z" level=info msg="Container 7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:48.715222 containerd[1526]: time="2025-09-03T23:24:48.715174007Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2\""
Sep 3 23:24:48.716844 containerd[1526]: time="2025-09-03T23:24:48.716794003Z" level=info msg="StartContainer for \"7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2\""
Sep 3 23:24:48.718091 containerd[1526]: time="2025-09-03T23:24:48.718063279Z" level=info msg="connecting to shim 7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2" address="unix:///run/containerd/s/e128f405bf28dfa0bf308fb0552df58c4e142b7bd034564f6e3a70029d01c781" protocol=ttrpc version=3
Sep 3 23:24:48.737378 systemd[1]: Started cri-containerd-7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2.scope - libcontainer container 7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2.
Sep 3 23:24:48.763906 containerd[1526]: time="2025-09-03T23:24:48.763858752Z" level=info msg="StartContainer for \"7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2\" returns successfully"
Sep 3 23:24:48.772666 systemd[1]: cri-containerd-7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2.scope: Deactivated successfully.
Sep 3 23:24:48.779258 containerd[1526]: time="2025-09-03T23:24:48.779181910Z" level=info msg="received exit event container_id:\"7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2\" id:\"7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2\" pid:4467 exited_at:{seconds:1756941888 nanos:778909631}"
Sep 3 23:24:48.779351 containerd[1526]: time="2025-09-03T23:24:48.779276510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2\" id:\"7c4e78d24bc4f6d69c93fc876642ac43bc3545b4aae7e5351ae96fe42c9610a2\" pid:4467 exited_at:{seconds:1756941888 nanos:778909631}"
Sep 3 23:24:49.485185 kubelet[2643]: E0903 23:24:49.485115 2643 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 3 23:24:49.653662 containerd[1526]: time="2025-09-03T23:24:49.653329645Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 3 23:24:49.677332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132454661.mount: Deactivated successfully.
Sep 3 23:24:49.677612 containerd[1526]: time="2025-09-03T23:24:49.677352820Z" level=info msg="Container 30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:49.686870 containerd[1526]: time="2025-09-03T23:24:49.686829634Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3\""
Sep 3 23:24:49.687380 containerd[1526]: time="2025-09-03T23:24:49.687360153Z" level=info msg="StartContainer for \"30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3\""
Sep 3 23:24:49.689455 containerd[1526]: time="2025-09-03T23:24:49.689426227Z" level=info msg="connecting to shim 30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3" address="unix:///run/containerd/s/e128f405bf28dfa0bf308fb0552df58c4e142b7bd034564f6e3a70029d01c781" protocol=ttrpc version=3
Sep 3 23:24:49.709806 systemd[1]: Started cri-containerd-30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3.scope - libcontainer container 30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3.
Sep 3 23:24:49.735403 containerd[1526]: time="2025-09-03T23:24:49.735216543Z" level=info msg="StartContainer for \"30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3\" returns successfully"
Sep 3 23:24:49.741554 systemd[1]: cri-containerd-30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3.scope: Deactivated successfully.
Sep 3 23:24:49.741937 containerd[1526]: time="2025-09-03T23:24:49.741904045Z" level=info msg="received exit event container_id:\"30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3\" id:\"30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3\" pid:4518 exited_at:{seconds:1756941889 nanos:741693246}"
Sep 3 23:24:49.742077 containerd[1526]: time="2025-09-03T23:24:49.742039085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3\" id:\"30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3\" pid:4518 exited_at:{seconds:1756941889 nanos:741693246}"
Sep 3 23:24:50.555264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30d71aa44c0ebd78cf15aa2f9c3d06d163ed1bfe6c7c893dffe3914a969dd7d3-rootfs.mount: Deactivated successfully.
Sep 3 23:24:50.657208 containerd[1526]: time="2025-09-03T23:24:50.657156641Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 3 23:24:50.671269 containerd[1526]: time="2025-09-03T23:24:50.671207524Z" level=info msg="Container 8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:50.680957 containerd[1526]: time="2025-09-03T23:24:50.680909778Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f\""
Sep 3 23:24:50.681452 containerd[1526]: time="2025-09-03T23:24:50.681413617Z" level=info msg="StartContainer for \"8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f\""
Sep 3 23:24:50.683060 containerd[1526]: time="2025-09-03T23:24:50.683021412Z" level=info msg="connecting to shim 8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f" address="unix:///run/containerd/s/e128f405bf28dfa0bf308fb0552df58c4e142b7bd034564f6e3a70029d01c781" protocol=ttrpc version=3
Sep 3 23:24:50.701799 systemd[1]: Started cri-containerd-8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f.scope - libcontainer container 8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f.
Sep 3 23:24:50.734625 systemd[1]: cri-containerd-8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f.scope: Deactivated successfully.
Sep 3 23:24:50.735227 containerd[1526]: time="2025-09-03T23:24:50.735168874Z" level=info msg="StartContainer for \"8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f\" returns successfully"
Sep 3 23:24:50.736951 containerd[1526]: time="2025-09-03T23:24:50.736912469Z" level=info msg="received exit event container_id:\"8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f\" id:\"8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f\" pid:4565 exited_at:{seconds:1756941890 nanos:736533830}"
Sep 3 23:24:50.737588 containerd[1526]: time="2025-09-03T23:24:50.737021909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f\" id:\"8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f\" pid:4565 exited_at:{seconds:1756941890 nanos:736533830}"
Sep 3 23:24:51.555388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bb78ab1991a85af65fce95d5a249e0e0253e9cbf047e0c24698e4862ed9ca9f-rootfs.mount: Deactivated successfully.
Sep 3 23:24:51.663670 containerd[1526]: time="2025-09-03T23:24:51.663415208Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 3 23:24:51.674216 containerd[1526]: time="2025-09-03T23:24:51.674183260Z" level=info msg="Container 5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:51.681956 containerd[1526]: time="2025-09-03T23:24:51.681919840Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c\""
Sep 3 23:24:51.682661 containerd[1526]: time="2025-09-03T23:24:51.682473839Z" level=info msg="StartContainer for \"5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c\""
Sep 3 23:24:51.683614 containerd[1526]: time="2025-09-03T23:24:51.683584636Z" level=info msg="connecting to shim 5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c" address="unix:///run/containerd/s/e128f405bf28dfa0bf308fb0552df58c4e142b7bd034564f6e3a70029d01c781" protocol=ttrpc version=3
Sep 3 23:24:51.710866 systemd[1]: Started cri-containerd-5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c.scope - libcontainer container 5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c.
Sep 3 23:24:51.736199 systemd[1]: cri-containerd-5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c.scope: Deactivated successfully.
Sep 3 23:24:51.738814 containerd[1526]: time="2025-09-03T23:24:51.738778653Z" level=info msg="received exit event container_id:\"5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c\" id:\"5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c\" pid:4604 exited_at:{seconds:1756941891 nanos:738157014}"
Sep 3 23:24:51.739004 containerd[1526]: time="2025-09-03T23:24:51.738971652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c\" id:\"5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c\" pid:4604 exited_at:{seconds:1756941891 nanos:738157014}"
Sep 3 23:24:51.745877 containerd[1526]: time="2025-09-03T23:24:51.745828314Z" level=info msg="StartContainer for \"5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c\" returns successfully"
Sep 3 23:24:52.555475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5649ae8e2c929d32a2e02ae688af1dcbc96ba256833e8ca29698d0ef3a688c2c-rootfs.mount: Deactivated successfully.
Sep 3 23:24:52.672097 containerd[1526]: time="2025-09-03T23:24:52.671734867Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 3 23:24:52.681583 containerd[1526]: time="2025-09-03T23:24:52.681540242Z" level=info msg="Container fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:52.689173 containerd[1526]: time="2025-09-03T23:24:52.689121382Z" level=info msg="CreateContainer within sandbox \"1b60a3f8aaeb4ec78131ad4af69c83d2993088cf7f3a8849482672f34d500920\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\""
Sep 3 23:24:52.689632 containerd[1526]: time="2025-09-03T23:24:52.689605221Z" level=info msg="StartContainer for \"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\""
Sep 3 23:24:52.691614 containerd[1526]: time="2025-09-03T23:24:52.691471617Z" level=info msg="connecting to shim fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e" address="unix:///run/containerd/s/e128f405bf28dfa0bf308fb0552df58c4e142b7bd034564f6e3a70029d01c781" protocol=ttrpc version=3
Sep 3 23:24:52.713820 systemd[1]: Started cri-containerd-fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e.scope - libcontainer container fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e.
Sep 3 23:24:52.742862 containerd[1526]: time="2025-09-03T23:24:52.742823006Z" level=info msg="StartContainer for \"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\" returns successfully"
Sep 3 23:24:52.802928 containerd[1526]: time="2025-09-03T23:24:52.802884773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\" id:\"b073ecae01ab0fd32566ee4a3d34f953e6177a36c53426090ae103c2d8369c7c\" pid:4671 exited_at:{seconds:1756941892 nanos:801754616}"
Sep 3 23:24:52.993669 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 3 23:24:54.878392 containerd[1526]: time="2025-09-03T23:24:54.878322683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\" id:\"791b036e27c8d4c9d01eb2f7ada7a0f5e282bdb0ca9abeaf642305d7f0d0d3d2\" pid:4845 exit_status:1 exited_at:{seconds:1756941894 nanos:878002844}"
Sep 3 23:24:55.890290 systemd-networkd[1445]: lxc_health: Link UP
Sep 3 23:24:55.891596 systemd-networkd[1445]: lxc_health: Gained carrier
Sep 3 23:24:56.644077 kubelet[2643]: I0903 23:24:56.643940 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xfwdt" podStartSLOduration=8.643922934 podStartE2EDuration="8.643922934s" podCreationTimestamp="2025-09-03 23:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:53.688350959 +0000 UTC m=+79.359405925" watchObservedRunningTime="2025-09-03 23:24:56.643922934 +0000 UTC m=+82.314977900"
Sep 3 23:24:56.970806 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Sep 3 23:24:57.000777 containerd[1526]: time="2025-09-03T23:24:57.000739100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\" id:\"2881c0d8683ac912c035a05c34e38cce7a5c541527e242fc1b512753c0de3798\" pid:5207 exited_at:{seconds:1756941897 nanos:270901}"
Sep 3 23:24:59.112451 containerd[1526]: time="2025-09-03T23:24:59.112400077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\" id:\"7952858f19ce2781bf6d4b6c0945599d7340cc7fe117c568e74cfc012f2b0fc9\" pid:5233 exited_at:{seconds:1756941899 nanos:112032997}"
Sep 3 23:25:01.264825 containerd[1526]: time="2025-09-03T23:25:01.264712003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa9c5ec412f75149111e0df4853c564aab6e527f5ab0f8cc0071a8d07f5dd88e\" id:\"b9f11d239d576e4fe107dbf9475142f89a13cb8e6466d557fa1361ff4d7c0562\" pid:5270 exited_at:{seconds:1756941901 nanos:264420283}"
Sep 3 23:25:01.269945 sshd[4398]: Connection closed by 10.0.0.1 port 47984
Sep 3 23:25:01.271557 sshd-session[4396]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:01.274231 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:47984.service: Deactivated successfully.
Sep 3 23:25:01.277504 systemd[1]: session-25.scope: Deactivated successfully.
Sep 3 23:25:01.278895 systemd-logind[1505]: Session 25 logged out. Waiting for processes to exit.
Sep 3 23:25:01.280707 systemd-logind[1505]: Removed session 25.