Sep 9 23:37:06.751537 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 23:37:06.751558 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:08:34 -00 2025
Sep 9 23:37:06.751568 kernel: KASLR enabled
Sep 9 23:37:06.751574 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:37:06.751579 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 23:37:06.751584 kernel: random: crng init done
Sep 9 23:37:06.751591 kernel: secureboot: Secure boot disabled
Sep 9 23:37:06.751596 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:37:06.751602 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 23:37:06.751609 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 23:37:06.751615 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751621 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751626 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751632 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751639 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751655 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751661 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751667 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751673 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:37:06.751679 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 23:37:06.751685 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:37:06.751691 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:37:06.751697 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 23:37:06.751703 kernel: Zone ranges:
Sep 9 23:37:06.751709 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:37:06.751716 kernel: DMA32 empty
Sep 9 23:37:06.751722 kernel: Normal empty
Sep 9 23:37:06.751728 kernel: Device empty
Sep 9 23:37:06.751733 kernel: Movable zone start for each node
Sep 9 23:37:06.751739 kernel: Early memory node ranges
Sep 9 23:37:06.751745 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 23:37:06.751751 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 23:37:06.751757 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 23:37:06.751763 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 23:37:06.751769 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 23:37:06.751775 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 23:37:06.751780 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 23:37:06.751788 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 23:37:06.751794 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 23:37:06.751800 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 23:37:06.751808 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 23:37:06.751815 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 23:37:06.751821 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 23:37:06.751829 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:37:06.751835 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 23:37:06.751841 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 23:37:06.751848 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:37:06.751854 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:37:06.751860 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:37:06.751866 kernel: psci: Trusted OS migration not required
Sep 9 23:37:06.751873 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:37:06.751879 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 23:37:06.751885 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:37:06.751893 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:37:06.751900 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 23:37:06.751906 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:37:06.751913 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:37:06.751919 kernel: CPU features: detected: Spectre-v4
Sep 9 23:37:06.751926 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:37:06.751933 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:37:06.751939 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:37:06.751946 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 23:37:06.751952 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:37:06.751959 kernel: alternatives: applying boot alternatives
Sep 9 23:37:06.751966 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a0303a4c67016bd8cbb391a5d1bb2355d0bb259dfb78ea746a1288c781f86ca
Sep 9 23:37:06.751974 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:37:06.751981 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:37:06.751987 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:37:06.751993 kernel: Fallback order for Node 0: 0
Sep 9 23:37:06.752000 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 23:37:06.752006 kernel: Policy zone: DMA
Sep 9 23:37:06.752012 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:37:06.752019 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 23:37:06.752025 kernel: software IO TLB: area num 4.
Sep 9 23:37:06.752032 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 23:37:06.752038 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 23:37:06.752046 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 23:37:06.752052 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:37:06.752059 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:37:06.752065 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 23:37:06.752072 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:37:06.752078 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:37:06.752085 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:37:06.752091 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 23:37:06.752098 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:37:06.752105 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:37:06.752111 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:37:06.752119 kernel: GICv3: 256 SPIs implemented
Sep 9 23:37:06.752125 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:37:06.752132 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:37:06.752139 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 23:37:06.752145 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:37:06.752152 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 23:37:06.752159 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 23:37:06.752166 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:37:06.752173 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:37:06.752180 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 23:37:06.752186 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 23:37:06.752193 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:37:06.752201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:37:06.752207 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 23:37:06.752214 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 23:37:06.752220 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 23:37:06.752226 kernel: arm-pv: using stolen time PV
Sep 9 23:37:06.752233 kernel: Console: colour dummy device 80x25
Sep 9 23:37:06.752239 kernel: ACPI: Core revision 20240827
Sep 9 23:37:06.752269 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 23:37:06.752277 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:37:06.752283 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:37:06.752292 kernel: landlock: Up and running.
Sep 9 23:37:06.752299 kernel: SELinux: Initializing.
Sep 9 23:37:06.752305 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:37:06.752312 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:37:06.752319 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:37:06.752326 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:37:06.752332 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:37:06.752339 kernel: Remapping and enabling EFI services.
Sep 9 23:37:06.752345 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:37:06.752357 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:37:06.752364 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 23:37:06.752371 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 23:37:06.752379 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:37:06.752386 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 23:37:06.752393 kernel: Detected PIPT I-cache on CPU2
Sep 9 23:37:06.752400 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 23:37:06.752407 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 23:37:06.752415 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:37:06.752422 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 23:37:06.752429 kernel: Detected PIPT I-cache on CPU3
Sep 9 23:37:06.752436 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 23:37:06.752442 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 23:37:06.752449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:37:06.752456 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 23:37:06.752463 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 23:37:06.752470 kernel: SMP: Total of 4 processors activated.
Sep 9 23:37:06.752478 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:37:06.752485 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:37:06.752492 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:37:06.752498 kernel: CPU features: detected: Common not Private translations
Sep 9 23:37:06.752505 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:37:06.752512 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 23:37:06.752519 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:37:06.752526 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:37:06.752533 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:37:06.752541 kernel: CPU features: detected: RAS Extension Support
Sep 9 23:37:06.752547 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:37:06.752554 kernel: alternatives: applying system-wide alternatives
Sep 9 23:37:06.752561 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 23:37:06.752568 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 9 23:37:06.752575 kernel: devtmpfs: initialized
Sep 9 23:37:06.752582 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:37:06.752589 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 23:37:06.752596 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:37:06.752605 kernel: 0 pages in range for non-PLT usage
Sep 9 23:37:06.752613 kernel: 508560 pages in range for PLT usage
Sep 9 23:37:06.752619 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:37:06.752626 kernel: SMBIOS 3.0.0 present.
Sep 9 23:37:06.752633 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 23:37:06.752644 kernel: DMI: Memory slots populated: 1/1
Sep 9 23:37:06.752652 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:37:06.752659 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:37:06.752666 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:37:06.752675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:37:06.752682 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:37:06.752689 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 9 23:37:06.752696 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:37:06.752702 kernel: cpuidle: using governor menu
Sep 9 23:37:06.752709 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:37:06.752716 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:37:06.752723 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:37:06.752729 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:37:06.752737 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:37:06.752744 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:37:06.752751 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:37:06.752758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:37:06.752765 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:37:06.752772 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:37:06.752778 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:37:06.752785 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:37:06.752792 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:37:06.752800 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:37:06.752807 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:37:06.752814 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:37:06.752820 kernel: ACPI: Interpreter enabled
Sep 9 23:37:06.752827 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:37:06.752834 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:37:06.752841 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:37:06.752848 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:37:06.752854 kernel: ACPI: CPU2 has been hot-added
Sep 9 23:37:06.752861 kernel: ACPI: CPU3 has been hot-added
Sep 9 23:37:06.752870 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:37:06.752877 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 23:37:06.752884 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 23:37:06.753020 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:37:06.753088 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:37:06.753148 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:37:06.753206 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 23:37:06.753293 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 23:37:06.753303 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 23:37:06.753310 kernel: PCI host bridge to bus 0000:00
Sep 9 23:37:06.753384 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 23:37:06.753440 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:37:06.753493 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 23:37:06.753545 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 23:37:06.753625 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:37:06.753710 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 23:37:06.753773 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 23:37:06.753834 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 23:37:06.753894 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:37:06.753953 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 23:37:06.754014 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 23:37:06.754076 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 23:37:06.754131 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 23:37:06.754185 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:37:06.754239 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 23:37:06.754257 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:37:06.754265 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:37:06.754272 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:37:06.754281 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:37:06.754288 kernel: iommu: Default domain type: Translated
Sep 9 23:37:06.754295 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:37:06.754302 kernel: efivars: Registered efivars operations
Sep 9 23:37:06.754309 kernel: vgaarb: loaded
Sep 9 23:37:06.754316 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:37:06.754323 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:37:06.754330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:37:06.754336 kernel: pnp: PnP ACPI init
Sep 9 23:37:06.754408 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 23:37:06.754419 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:37:06.754426 kernel: NET: Registered PF_INET protocol family
Sep 9 23:37:06.754433 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:37:06.754440 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:37:06.754448 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:37:06.754455 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:37:06.754462 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:37:06.754470 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:37:06.754478 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:37:06.754485 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:37:06.754492 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:37:06.754499 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:37:06.754505 kernel: kvm [1]: HYP mode not available
Sep 9 23:37:06.754512 kernel: Initialise system trusted keyrings
Sep 9 23:37:06.754519 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:37:06.754526 kernel: Key type asymmetric registered
Sep 9 23:37:06.754534 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:37:06.754541 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:37:06.754548 kernel: io scheduler mq-deadline registered
Sep 9 23:37:06.754555 kernel: io scheduler kyber registered
Sep 9 23:37:06.754562 kernel: io scheduler bfq registered
Sep 9 23:37:06.754569 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:37:06.754576 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:37:06.754583 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:37:06.754658 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 23:37:06.754670 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:37:06.754678 kernel: thunder_xcv, ver 1.0
Sep 9 23:37:06.754685 kernel: thunder_bgx, ver 1.0
Sep 9 23:37:06.754692 kernel: nicpf, ver 1.0
Sep 9 23:37:06.754699 kernel: nicvf, ver 1.0
Sep 9 23:37:06.754770 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:37:06.754828 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:37:06 UTC (1757461026)
Sep 9 23:37:06.754838 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:37:06.754845 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 23:37:06.754854 kernel: watchdog: NMI not fully supported
Sep 9 23:37:06.754861 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:37:06.754868 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:37:06.754875 kernel: Segment Routing with IPv6
Sep 9 23:37:06.754882 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:37:06.754889 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:37:06.754896 kernel: Key type dns_resolver registered
Sep 9 23:37:06.754902 kernel: registered taskstats version 1
Sep 9 23:37:06.754909 kernel: Loading compiled-in X.509 certificates
Sep 9 23:37:06.754917 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 820dabbdbfae37dcb388874c78ed83c436750814'
Sep 9 23:37:06.754924 kernel: Demotion targets for Node 0: null
Sep 9 23:37:06.754931 kernel: Key type .fscrypt registered
Sep 9 23:37:06.754938 kernel: Key type fscrypt-provisioning registered
Sep 9 23:37:06.754945 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:37:06.754952 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:37:06.754959 kernel: ima: No architecture policies found
Sep 9 23:37:06.754966 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:37:06.754974 kernel: clk: Disabling unused clocks
Sep 9 23:37:06.754981 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:37:06.754988 kernel: Warning: unable to open an initial console.
Sep 9 23:37:06.754995 kernel: Freeing unused kernel memory: 38976K
Sep 9 23:37:06.755002 kernel: Run /init as init process
Sep 9 23:37:06.755009 kernel: with arguments:
Sep 9 23:37:06.755016 kernel: /init
Sep 9 23:37:06.755022 kernel: with environment:
Sep 9 23:37:06.755029 kernel: HOME=/
Sep 9 23:37:06.755036 kernel: TERM=linux
Sep 9 23:37:06.755044 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:37:06.755052 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:37:06.755062 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:37:06.755070 systemd[1]: Detected virtualization kvm.
Sep 9 23:37:06.755077 systemd[1]: Detected architecture arm64.
Sep 9 23:37:06.755084 systemd[1]: Running in initrd.
Sep 9 23:37:06.755091 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:37:06.755101 systemd[1]: Hostname set to .
Sep 9 23:37:06.755108 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:37:06.755116 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:37:06.755124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:37:06.755131 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:37:06.755139 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:37:06.755147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:37:06.755154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:37:06.755164 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:37:06.755173 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:37:06.755180 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:37:06.755188 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:37:06.755195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:37:06.755203 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:37:06.755210 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:37:06.755219 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:37:06.755227 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:37:06.755234 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:37:06.755242 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:37:06.755258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:37:06.755266 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:37:06.755274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:37:06.755281 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:37:06.755291 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:37:06.755299 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:37:06.755307 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:37:06.755314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:37:06.755322 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:37:06.755330 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:37:06.755337 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:37:06.755344 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:37:06.755352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:37:06.755360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:37:06.755368 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:37:06.755376 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:37:06.755383 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:37:06.755392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:37:06.755416 systemd-journald[244]: Collecting audit messages is disabled.
Sep 9 23:37:06.755435 systemd-journald[244]: Journal started
Sep 9 23:37:06.755455 systemd-journald[244]: Runtime Journal (/run/log/journal/4e9be6ee02ab49e58a60599a5ffbd1eb) is 6M, max 48.5M, 42.4M free.
Sep 9 23:37:06.762349 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:37:06.762378 kernel: Bridge firewalling registered
Sep 9 23:37:06.746637 systemd-modules-load[247]: Inserted module 'overlay'
Sep 9 23:37:06.763984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:37:06.760601 systemd-modules-load[247]: Inserted module 'br_netfilter'
Sep 9 23:37:06.767375 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:37:06.766828 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:37:06.768303 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:37:06.774130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:37:06.776012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:37:06.778089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:37:06.793277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:37:06.800244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:37:06.802873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:37:06.802895 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:37:06.805649 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:37:06.809078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:37:06.811967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:37:06.824618 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:37:06.840058 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a0303a4c67016bd8cbb391a5d1bb2355d0bb259dfb78ea746a1288c781f86ca
Sep 9 23:37:06.848677 systemd-resolved[288]: Positive Trust Anchors:
Sep 9 23:37:06.848696 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:37:06.848728 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:37:06.853662 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 9 23:37:06.854701 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:37:06.858283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:37:06.916278 kernel: SCSI subsystem initialized
Sep 9 23:37:06.924288 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:37:06.929272 kernel: iscsi: registered transport (tcp)
Sep 9 23:37:06.943303 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:37:06.943361 kernel: QLogic iSCSI HBA Driver
Sep 9 23:37:06.961262 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:37:06.984122 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:37:06.986431 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:37:07.033943 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:37:07.036109 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:37:07.107278 kernel: raid6: neonx8 gen() 15725 MB/s
Sep 9 23:37:07.124274 kernel: raid6: neonx4 gen() 15691 MB/s
Sep 9 23:37:07.141283 kernel: raid6: neonx2 gen() 13142 MB/s
Sep 9 23:37:07.158284 kernel: raid6: neonx1 gen() 10425 MB/s
Sep 9 23:37:07.175269 kernel: raid6: int64x8 gen() 6877 MB/s
Sep 9 23:37:07.192285 kernel: raid6: int64x4 gen() 7204 MB/s
Sep 9 23:37:07.209289 kernel: raid6: int64x2 gen() 6004 MB/s
Sep 9 23:37:07.226291 kernel: raid6: int64x1 gen() 5043 MB/s
Sep 9 23:37:07.226322 kernel: raid6: using algorithm neonx8 gen() 15725 MB/s
Sep 9 23:37:07.243283 kernel: raid6: .... xor() 10085 MB/s, rmw enabled
Sep 9 23:37:07.243306 kernel: raid6: using neon recovery algorithm
Sep 9 23:37:07.248374 kernel: xor: measuring software checksum speed
Sep 9 23:37:07.248402 kernel: 8regs : 21596 MB/sec
Sep 9 23:37:07.249416 kernel: 32regs : 21693 MB/sec
Sep 9 23:37:07.249429 kernel: arm64_neon : 28244 MB/sec
Sep 9 23:37:07.249438 kernel: xor: using function: arm64_neon (28244 MB/sec)
Sep 9 23:37:07.303280 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:37:07.309531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:37:07.312117 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:37:07.343459 systemd-udevd[503]: Using default interface naming scheme 'v255'.
Sep 9 23:37:07.347576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:37:07.349683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:37:07.376588 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Sep 9 23:37:07.401108 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:37:07.403384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:37:07.461111 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:37:07.465176 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:37:07.506269 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 23:37:07.508283 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 23:37:07.512586 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:37:07.512618 kernel: GPT:9289727 != 19775487
Sep 9 23:37:07.512629 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:37:07.512644 kernel: GPT:9289727 != 19775487
Sep 9 23:37:07.513295 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:37:07.515287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:37:07.519444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:37:07.519561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:37:07.523985 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:37:07.525955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:37:07.543925 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 23:37:07.557173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 23:37:07.558759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:37:07.561710 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:37:07.577839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 23:37:07.579150 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 23:37:07.587775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:37:07.588942 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:37:07.590827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:37:07.592552 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:37:07.594843 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:37:07.596673 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:37:07.617648 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:37:07.620941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:37:07.620965 disk-uuid[596]: Primary Header is updated.
Sep 9 23:37:07.620965 disk-uuid[596]: Secondary Entries is updated.
Sep 9 23:37:07.620965 disk-uuid[596]: Secondary Header is updated.
Sep 9 23:37:08.636284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:37:08.636560 disk-uuid[602]: The operation has completed successfully.
Sep 9 23:37:08.665960 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:37:08.667073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:37:08.692852 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:37:08.709513 sh[616]: Success
Sep 9 23:37:08.722857 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:37:08.722910 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:37:08.722930 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:37:08.730274 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:37:08.757868 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:37:08.760815 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:37:08.778743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:37:08.784379 kernel: BTRFS: device fsid 61baaba1-cd1f-4e69-9af9-cc1b703c9653 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (628)
Sep 9 23:37:08.784420 kernel: BTRFS info (device dm-0): first mount of filesystem 61baaba1-cd1f-4e69-9af9-cc1b703c9653
Sep 9 23:37:08.784432 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:37:08.790717 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:37:08.790756 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:37:08.789819 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:37:08.791754 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:37:08.792716 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:37:08.793526 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:37:08.794940 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:37:08.827100 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (660)
Sep 9 23:37:08.827157 kernel: BTRFS info (device vda6): first mount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:37:08.827168 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:37:08.831281 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:37:08.831334 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:37:08.835265 kernel: BTRFS info (device vda6): last unmount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:37:08.836220 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:37:08.838466 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:37:08.908996 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:37:08.912042 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:37:08.936950 ignition[706]: Ignition 2.21.0
Sep 9 23:37:08.936962 ignition[706]: Stage: fetch-offline
Sep 9 23:37:08.936995 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:37:08.937016 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:37:08.937225 ignition[706]: parsed url from cmdline: ""
Sep 9 23:37:08.937228 ignition[706]: no config URL provided
Sep 9 23:37:08.937232 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:37:08.937240 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:37:08.937281 ignition[706]: op(1): [started] loading QEMU firmware config module
Sep 9 23:37:08.937286 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 23:37:08.948125 ignition[706]: op(1): [finished] loading QEMU firmware config module
Sep 9 23:37:08.954069 systemd-networkd[807]: lo: Link UP
Sep 9 23:37:08.954083 systemd-networkd[807]: lo: Gained carrier
Sep 9 23:37:08.954799 systemd-networkd[807]: Enumeration completed
Sep 9 23:37:08.954887 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:37:08.956178 systemd[1]: Reached target network.target - Network.
Sep 9 23:37:08.957947 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:37:08.957950 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:37:08.958708 systemd-networkd[807]: eth0: Link UP
Sep 9 23:37:08.959096 systemd-networkd[807]: eth0: Gained carrier
Sep 9 23:37:08.959106 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:37:08.982315 systemd-networkd[807]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:37:08.998132 ignition[706]: parsing config with SHA512: 3d0db56e6c5b9c925cdb2563677f2648daca5167e9450011ed02e8d06d357017478fb51e6b373cf4a23811a521f3b0d63caaaef434fb8860ba9f7a398f2a5700
Sep 9 23:37:09.002959 unknown[706]: fetched base config from "system"
Sep 9 23:37:09.002972 unknown[706]: fetched user config from "qemu"
Sep 9 23:37:09.003429 ignition[706]: fetch-offline: fetch-offline passed
Sep 9 23:37:09.004805 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:37:09.003486 ignition[706]: Ignition finished successfully
Sep 9 23:37:09.006392 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 23:37:09.007242 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:37:09.034419 ignition[815]: Ignition 2.21.0
Sep 9 23:37:09.034436 ignition[815]: Stage: kargs
Sep 9 23:37:09.034572 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:37:09.034581 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:37:09.035719 ignition[815]: kargs: kargs passed
Sep 9 23:37:09.035774 ignition[815]: Ignition finished successfully
Sep 9 23:37:09.038550 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:37:09.040486 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:37:09.063515 ignition[823]: Ignition 2.21.0
Sep 9 23:37:09.063530 ignition[823]: Stage: disks
Sep 9 23:37:09.063680 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:37:09.063690 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:37:09.064476 ignition[823]: disks: disks passed
Sep 9 23:37:09.064519 ignition[823]: Ignition finished successfully
Sep 9 23:37:09.067952 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:37:09.069326 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:37:09.070787 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:37:09.072599 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:37:09.074215 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:37:09.075842 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:37:09.078101 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:37:09.106978 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:37:09.111040 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:37:09.113568 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:37:09.178265 kernel: EXT4-fs (vda9): mounted filesystem b3fb930d-58c7-4aff-a89a-67d23b38af56 r/w with ordered data mode. Quota mode: none.
Sep 9 23:37:09.178718 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:37:09.180009 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:37:09.182340 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:37:09.200809 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:37:09.201845 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:37:09.201886 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:37:09.201910 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:37:09.213083 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (842)
Sep 9 23:37:09.213105 kernel: BTRFS info (device vda6): first mount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:37:09.213115 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:37:09.206711 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:37:09.216271 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:37:09.216290 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:37:09.212988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:37:09.217776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:37:09.256132 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:37:09.259978 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:37:09.263660 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:37:09.266983 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:37:09.332951 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:37:09.335059 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:37:09.336602 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:37:09.355277 kernel: BTRFS info (device vda6): last unmount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:37:09.372388 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:37:09.383945 ignition[956]: INFO : Ignition 2.21.0
Sep 9 23:37:09.383945 ignition[956]: INFO : Stage: mount
Sep 9 23:37:09.385870 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:37:09.385870 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:37:09.387637 ignition[956]: INFO : mount: mount passed
Sep 9 23:37:09.387637 ignition[956]: INFO : Ignition finished successfully
Sep 9 23:37:09.390285 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:37:09.392194 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:37:09.783546 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:37:09.784998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:37:09.804264 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968)
Sep 9 23:37:09.806263 kernel: BTRFS info (device vda6): first mount of filesystem b5f2ab98-7907-428d-a6e6-1535b41157ff
Sep 9 23:37:09.806297 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:37:09.809271 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:37:09.809288 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:37:09.810495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:37:09.840022 ignition[985]: INFO : Ignition 2.21.0
Sep 9 23:37:09.840022 ignition[985]: INFO : Stage: files
Sep 9 23:37:09.842476 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:37:09.842476 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:37:09.842476 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:37:09.842476 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:37:09.842476 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:37:09.847760 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:37:09.847760 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:37:09.847760 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:37:09.847760 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 23:37:09.847760 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 9 23:37:09.845907 unknown[985]: wrote ssh authorized keys file for user: core
Sep 9 23:37:09.929933 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:37:10.210658 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 23:37:10.212241 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:37:10.213874 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 23:37:10.387422 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 23:37:10.479008 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:37:10.479008 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:37:10.481953 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:37:10.481953 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:37:10.481953 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:37:10.481953 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:37:10.481953 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:37:10.481953 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:37:10.481953 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:37:10.491665 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:37:10.491665 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:37:10.491665 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:37:10.507265 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:37:10.507265 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:37:10.511568 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 9 23:37:10.537409 systemd-networkd[807]: eth0: Gained IPv6LL
Sep 9 23:37:10.811896 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 23:37:11.220884 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:37:11.220884 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 23:37:11.224076 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:37:11.246191 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:37:11.246191 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 23:37:11.246191 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 23:37:11.250400 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:37:11.250400 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:37:11.250400 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 23:37:11.250400 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:37:11.262245 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:37:11.265876 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:37:11.268390 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:37:11.268390 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:37:11.268390 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:37:11.268390 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:37:11.268390 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:37:11.268390 ignition[985]: INFO : files: files passed
Sep 9 23:37:11.268390 ignition[985]: INFO : Ignition finished successfully
Sep 9 23:37:11.268779 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:37:11.272228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:37:11.274775 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:37:11.288410 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:37:11.288549 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:37:11.292478 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 23:37:11.295348 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:37:11.295348 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:37:11.298003 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:37:11.297645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:37:11.301440 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:37:11.303402 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:37:11.338930 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:37:11.339073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:37:11.341349 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:37:11.343111 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:37:11.344855 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:37:11.345719 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:37:11.369179 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:37:11.372074 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:37:11.389797 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:37:11.391337 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:37:11.393602 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:37:11.395494 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:37:11.395684 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:37:11.401132 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:37:11.405054 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:37:11.407855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:37:11.409086 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:37:11.411120 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:37:11.413151 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:37:11.415019 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:37:11.417171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:37:11.419428 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:37:11.421559 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:37:11.423124 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:37:11.424646 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:37:11.424793 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:37:11.427666 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:37:11.430748 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:37:11.433435 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:37:11.434357 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:37:11.435828 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:37:11.435971 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:37:11.439358 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:37:11.439506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:37:11.441654 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:37:11.443062 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:37:11.443164 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:37:11.445055 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:37:11.447100 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:37:11.448540 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:37:11.448674 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:37:11.450164 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:37:11.450240 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:37:11.452476 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:37:11.452602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:37:11.454438 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:37:11.454539 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:37:11.457073 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:37:11.459479 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:37:11.460452 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:37:11.460588 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:37:11.462634 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:37:11.462739 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:37:11.468675 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:37:11.477491 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:37:11.486429 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:37:11.494116 ignition[1040]: INFO : Ignition 2.21.0
Sep 9 23:37:11.494116 ignition[1040]: INFO : Stage: umount
Sep 9 23:37:11.497098 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:37:11.497098 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:37:11.497098 ignition[1040]: INFO : umount: umount passed
Sep 9 23:37:11.500669 ignition[1040]: INFO : Ignition finished successfully
Sep 9 23:37:11.501917 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:37:11.502976 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:37:11.505643 systemd[1]: Stopped target network.target - Network.
Sep 9 23:37:11.507160 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:37:11.507265 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:37:11.508062 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:37:11.508101 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:37:11.509865 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:37:11.509913 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:37:11.511634 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:37:11.511677 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:37:11.513545 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:37:11.514980 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:37:11.522488 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:37:11.522606 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:37:11.526359 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 23:37:11.526609 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 23:37:11.526727 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:37:11.530140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:37:11.530818 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 23:37:11.532491 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:37:11.532552 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:37:11.535878 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:37:11.537866 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:37:11.537947 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:37:11.540117 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:37:11.540170 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:37:11.543014 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:37:11.543062 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 23:37:11.545036 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 23:37:11.545090 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:37:11.549406 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:37:11.554144 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:37:11.554214 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:37:11.560581 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 9 23:37:11.562406 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 23:37:11.563685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:37:11.563751 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 23:37:11.565228 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:37:11.565358 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:37:11.571161 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:37:11.571388 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:37:11.573116 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:37:11.573152 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:37:11.574828 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:37:11.574861 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:37:11.576997 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:37:11.577074 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:37:11.580351 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:37:11.580406 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:37:11.582383 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:37:11.582433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:37:11.586179 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:37:11.587414 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 23:37:11.587477 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:37:11.590493 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 9 23:37:11.590548 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:37:11.592805 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 23:37:11.592849 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:37:11.596466 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 23:37:11.596515 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:37:11.598758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:37:11.598807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:37:11.605317 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 23:37:11.605380 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 23:37:11.605425 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 23:37:11.605460 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:37:11.606741 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:37:11.606862 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:37:11.609205 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:37:11.611742 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:37:11.632513 systemd[1]: Switching root. Sep 9 23:37:11.661384 systemd-journald[244]: Journal stopped Sep 9 23:37:12.417092 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Sep 9 23:37:12.417139 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:37:12.417155 kernel: SELinux: policy capability open_perms=1 Sep 9 23:37:12.417164 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:37:12.417174 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:37:12.417183 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:37:12.417192 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:37:12.417201 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:37:12.417214 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:37:12.417223 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 23:37:12.417232 kernel: audit: type=1403 audit(1757461031.846:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:37:12.417274 systemd[1]: Successfully loaded SELinux policy in 50.684ms. Sep 9 23:37:12.417298 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.739ms. Sep 9 23:37:12.417311 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:37:12.417321 systemd[1]: Detected virtualization kvm. Sep 9 23:37:12.417331 systemd[1]: Detected architecture arm64. Sep 9 23:37:12.417340 systemd[1]: Detected first boot. Sep 9 23:37:12.417354 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:37:12.417365 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:37:12.417374 zram_generator::config[1085]: No configuration found. Sep 9 23:37:12.417385 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:37:12.417397 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Sep 9 23:37:12.417407 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:37:12.417417 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:37:12.417427 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:37:12.417437 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:37:12.417447 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:37:12.417456 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:37:12.417466 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:37:12.417476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:37:12.417489 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:37:12.417498 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:37:12.417508 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:37:12.417518 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:37:12.417528 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:37:12.417538 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:37:12.417547 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:37:12.417557 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:37:12.417569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:37:12.417579 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Sep 9 23:37:12.417589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:37:12.417599 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:37:12.417609 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:37:12.417631 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:37:12.417641 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:37:12.417651 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:37:12.417663 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:37:12.417676 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:37:12.417687 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:37:12.417697 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:37:12.417707 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 23:37:12.417716 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:37:12.417726 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:37:12.417736 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:37:12.417746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:37:12.417757 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:37:12.417767 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:37:12.417777 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:37:12.417787 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:37:12.417796 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:37:12.417806 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 9 23:37:12.417816 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:37:12.417825 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:37:12.417836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:37:12.417847 systemd[1]: Reached target machines.target - Containers. Sep 9 23:37:12.417857 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:37:12.417867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:37:12.417877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:37:12.417887 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 23:37:12.417896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:37:12.417906 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:37:12.417916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:37:12.417927 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:37:12.417936 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:37:12.417946 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:37:12.417957 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:37:12.417966 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:37:12.417977 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:37:12.417987 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 9 23:37:12.417997 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:37:12.418008 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:37:12.418019 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:37:12.418029 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:37:12.418039 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:37:12.418049 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:37:12.418059 kernel: fuse: init (API version 7.41) Sep 9 23:37:12.418067 kernel: ACPI: bus type drm_connector registered Sep 9 23:37:12.418077 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:37:12.418089 kernel: loop: module loaded Sep 9 23:37:12.418099 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:37:12.418109 systemd[1]: Stopped verity-setup.service. Sep 9 23:37:12.418118 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:37:12.418128 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:37:12.418138 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:37:12.418149 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:37:12.418179 systemd-journald[1161]: Collecting audit messages is disabled. Sep 9 23:37:12.418200 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:37:12.418210 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Sep 9 23:37:12.418220 systemd-journald[1161]: Journal started Sep 9 23:37:12.418242 systemd-journald[1161]: Runtime Journal (/run/log/journal/4e9be6ee02ab49e58a60599a5ffbd1eb) is 6M, max 48.5M, 42.4M free. Sep 9 23:37:12.221411 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:37:12.239202 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 23:37:12.239591 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:37:12.421078 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:37:12.421977 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:37:12.424006 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:37:12.425542 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:37:12.425714 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:37:12.426963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:37:12.427111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:37:12.428357 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:37:12.428526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:37:12.429706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:37:12.429850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:37:12.432573 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:37:12.432742 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:37:12.434041 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:37:12.434201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:37:12.435535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 9 23:37:12.436788 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:37:12.438040 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:37:12.439394 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:37:12.449801 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:37:12.452056 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:37:12.453886 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:37:12.454827 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:37:12.454856 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:37:12.456537 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:37:12.465858 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:37:12.466776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:37:12.467753 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:37:12.469736 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:37:12.470973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:37:12.472314 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:37:12.473137 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:37:12.474020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 23:37:12.478977 systemd-journald[1161]: Time spent on flushing to /var/log/journal/4e9be6ee02ab49e58a60599a5ffbd1eb is 27.856ms for 893 entries. Sep 9 23:37:12.478977 systemd-journald[1161]: System Journal (/var/log/journal/4e9be6ee02ab49e58a60599a5ffbd1eb) is 8M, max 195.6M, 187.6M free. Sep 9 23:37:12.531865 systemd-journald[1161]: Received client request to flush runtime journal. Sep 9 23:37:12.531938 kernel: loop0: detected capacity change from 0 to 107312 Sep 9 23:37:12.531957 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:37:12.477510 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:37:12.479344 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 23:37:12.483603 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:37:12.485101 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:37:12.486585 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:37:12.489381 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:37:12.492785 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:37:12.497186 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:37:12.537298 kernel: loop1: detected capacity change from 0 to 203944 Sep 9 23:37:12.509992 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Sep 9 23:37:12.510003 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Sep 9 23:37:12.513859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:37:12.517854 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:37:12.532487 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 9 23:37:12.534301 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:37:12.544732 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 23:37:12.552445 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:37:12.556395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:37:12.574266 kernel: loop2: detected capacity change from 0 to 138376 Sep 9 23:37:12.578770 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Sep 9 23:37:12.578787 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Sep 9 23:37:12.583017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:37:12.606294 kernel: loop3: detected capacity change from 0 to 107312 Sep 9 23:37:12.613267 kernel: loop4: detected capacity change from 0 to 203944 Sep 9 23:37:12.621273 kernel: loop5: detected capacity change from 0 to 138376 Sep 9 23:37:12.627218 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 23:37:12.627663 (sd-merge)[1228]: Merged extensions into '/usr'. Sep 9 23:37:12.630943 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:37:12.630958 systemd[1]: Reloading... Sep 9 23:37:12.695873 zram_generator::config[1257]: No configuration found. Sep 9 23:37:12.771523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:37:12.784454 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:37:12.835939 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:37:12.836361 systemd[1]: Reloading finished in 204 ms. 
Sep 9 23:37:12.860283 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:37:12.861896 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:37:12.881530 systemd[1]: Starting ensure-sysext.service... Sep 9 23:37:12.883242 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:37:12.892355 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:37:12.892370 systemd[1]: Reloading... Sep 9 23:37:12.902203 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 23:37:12.902244 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 23:37:12.902479 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:37:12.902680 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:37:12.903302 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:37:12.903510 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Sep 9 23:37:12.903559 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Sep 9 23:37:12.906355 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:37:12.906367 systemd-tmpfiles[1289]: Skipping /boot Sep 9 23:37:12.925853 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:37:12.925869 systemd-tmpfiles[1289]: Skipping /boot Sep 9 23:37:12.946291 zram_generator::config[1319]: No configuration found. 
Sep 9 23:37:13.008418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:37:13.070647 systemd[1]: Reloading finished in 177 ms. Sep 9 23:37:13.096823 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:37:13.102385 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:37:13.109331 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:37:13.111708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:37:13.114046 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:37:13.116881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:37:13.120383 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:37:13.123327 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:37:13.135944 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:37:13.139965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:37:13.141136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:37:13.143653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:37:13.146482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:37:13.149383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 9 23:37:13.149557 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:37:13.150929 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:37:13.152724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:37:13.152866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:37:13.158350 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:37:13.163003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:37:13.169212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:37:13.169652 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Sep 9 23:37:13.170445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:37:13.170591 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:37:13.171860 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:37:13.173979 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:37:13.175647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:37:13.175794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:37:13.177679 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:37:13.177820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 9 23:37:13.184952 augenrules[1386]: No rules Sep 9 23:37:13.186550 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:37:13.188381 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:37:13.188576 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:37:13.190192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:37:13.190382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:37:13.194255 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:37:13.195883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:37:13.200448 systemd[1]: Finished ensure-sysext.service. Sep 9 23:37:13.204125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:37:13.206428 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:37:13.208518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:37:13.211240 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:37:13.214396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:37:13.214442 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:37:13.221561 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:37:13.226420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Sep 9 23:37:13.228416 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:37:13.228826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:37:13.230272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:37:13.234228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:37:13.235586 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:37:13.237830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:37:13.240115 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:37:13.240274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:37:13.253538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:37:13.282611 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:37:13.327805 systemd-resolved[1355]: Positive Trust Anchors: Sep 9 23:37:13.327830 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:37:13.327862 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:37:13.329984 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 23:37:13.331442 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 23:37:13.335370 systemd-resolved[1355]: Defaulting to hostname 'linux'. Sep 9 23:37:13.336609 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:37:13.344109 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:37:13.345438 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:37:13.347725 systemd-networkd[1431]: lo: Link UP Sep 9 23:37:13.347732 systemd-networkd[1431]: lo: Gained carrier Sep 9 23:37:13.348368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:37:13.349378 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:37:13.350534 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:37:13.351007 systemd-networkd[1431]: Enumeration completed Sep 9 23:37:13.351677 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 23:37:13.351927 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:37:13.351938 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:37:13.352817 systemd-networkd[1431]: eth0: Link UP
Sep 9 23:37:13.353012 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 23:37:13.353367 systemd-networkd[1431]: eth0: Gained carrier
Sep 9 23:37:13.353388 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:37:13.354315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 23:37:13.354343 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:37:13.355208 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:37:13.357077 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:37:13.359693 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:37:13.362795 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:37:13.364938 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:37:13.366120 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:37:13.369595 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:37:13.370297 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:37:13.370955 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:37:13.371282 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
Sep 9 23:37:13.372036 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 23:37:13.372088 systemd-timesyncd[1433]: Initial clock synchronization to Tue 2025-09-09 23:37:13.060797 UTC.
Sep 9 23:37:13.372854 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:37:13.373956 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:37:13.377480 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:37:13.380416 systemd[1]: Reached target network.target - Network.
Sep 9 23:37:13.381089 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:37:13.381960 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:37:13.382841 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:37:13.382866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:37:13.383850 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:37:13.385545 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:37:13.393415 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 23:37:13.395837 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:37:13.398008 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:37:13.399377 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:37:13.400264 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:37:13.402672 jq[1454]: false
Sep 9 23:37:13.403387 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:37:13.405462 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:37:13.414021 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:37:13.415781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 23:37:13.419965 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:37:13.421800 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 23:37:13.424142 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 23:37:13.426189 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:37:13.426552 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:37:13.428478 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:37:13.434327 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:37:13.440901 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 23:37:13.442487 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:37:13.442661 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:37:13.442891 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:37:13.443039 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:37:13.444933 extend-filesystems[1455]: Found /dev/vda6
Sep 9 23:37:13.445790 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:37:13.446161 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:37:13.447519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 23:37:13.450338 jq[1479]: true
Sep 9 23:37:13.452750 update_engine[1476]: I20250909 23:37:13.452592  1476 main.cc:92] Flatcar Update Engine starting
Sep 9 23:37:13.457260 extend-filesystems[1455]: Found /dev/vda9
Sep 9 23:37:13.461914 jq[1489]: true
Sep 9 23:37:13.462448 extend-filesystems[1455]: Checking size of /dev/vda9
Sep 9 23:37:13.464419 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 23:37:13.472715 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:37:13.479302 extend-filesystems[1455]: Resized partition /dev/vda9
Sep 9 23:37:13.481058 extend-filesystems[1514]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 23:37:13.488088 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 23:37:13.487962 dbus-daemon[1452]: [system] SELinux support is enabled
Sep 9 23:37:13.488119 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:37:13.492130 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:37:13.492162 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:37:13.494756 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:37:13.494777 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:37:13.496595 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:37:13.496822 tar[1486]: linux-arm64/helm
Sep 9 23:37:13.503378 update_engine[1476]: I20250909 23:37:13.503324  1476 update_check_scheduler.cc:74] Next update check in 9m11s
Sep 9 23:37:13.506543 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:37:13.524300 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 23:37:13.536851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:37:13.543973 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 23:37:13.543973 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:37:13.543973 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 23:37:13.551440 extend-filesystems[1455]: Resized filesystem in /dev/vda9
Sep 9 23:37:13.545467 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:37:13.553922 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:37:13.547283 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:37:13.556299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:37:13.565604 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 23:37:13.642687 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 23:37:13.644654 systemd-logind[1470]: New seat seat0.
Sep 9 23:37:13.646442 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:37:13.652706 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 23:37:13.658307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:37:13.675391 containerd[1499]: time="2025-09-09T23:37:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 23:37:13.675628 containerd[1499]: time="2025-09-09T23:37:13.675486960Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 9 23:37:13.685692 containerd[1499]: time="2025-09-09T23:37:13.685650360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.72µs"
Sep 9 23:37:13.685692 containerd[1499]: time="2025-09-09T23:37:13.685684680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 23:37:13.685809 containerd[1499]: time="2025-09-09T23:37:13.685703240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 23:37:13.685857 containerd[1499]: time="2025-09-09T23:37:13.685839840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 23:37:13.685887 containerd[1499]: time="2025-09-09T23:37:13.685859360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 23:37:13.685887 containerd[1499]: time="2025-09-09T23:37:13.685881920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:37:13.685944 containerd[1499]: time="2025-09-09T23:37:13.685927200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:37:13.685944 containerd[1499]: time="2025-09-09T23:37:13.685941520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686226 containerd[1499]: time="2025-09-09T23:37:13.686201160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686261 containerd[1499]: time="2025-09-09T23:37:13.686223680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686261 containerd[1499]: time="2025-09-09T23:37:13.686237040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686261 containerd[1499]: time="2025-09-09T23:37:13.686244680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686344 containerd[1499]: time="2025-09-09T23:37:13.686326240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686603 containerd[1499]: time="2025-09-09T23:37:13.686579880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686696 containerd[1499]: time="2025-09-09T23:37:13.686675280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:37:13.686730 containerd[1499]: time="2025-09-09T23:37:13.686695480Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 23:37:13.686748 containerd[1499]: time="2025-09-09T23:37:13.686731760Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 23:37:13.687184 containerd[1499]: time="2025-09-09T23:37:13.687162360Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 23:37:13.687283 containerd[1499]: time="2025-09-09T23:37:13.687243240Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:37:13.691976 containerd[1499]: time="2025-09-09T23:37:13.691943800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 23:37:13.692130 containerd[1499]: time="2025-09-09T23:37:13.691997440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 23:37:13.692130 containerd[1499]: time="2025-09-09T23:37:13.692013280Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 23:37:13.692130 containerd[1499]: time="2025-09-09T23:37:13.692032000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 23:37:13.692130 containerd[1499]: time="2025-09-09T23:37:13.692048920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 23:37:13.692130 containerd[1499]: time="2025-09-09T23:37:13.692061960Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 23:37:13.692130 containerd[1499]: time="2025-09-09T23:37:13.692118800Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 23:37:13.692253 containerd[1499]: time="2025-09-09T23:37:13.692135640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 23:37:13.692253 containerd[1499]: time="2025-09-09T23:37:13.692147320Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 23:37:13.692253 containerd[1499]: time="2025-09-09T23:37:13.692157440Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 23:37:13.692253 containerd[1499]: time="2025-09-09T23:37:13.692166200Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 23:37:13.692253 containerd[1499]: time="2025-09-09T23:37:13.692177960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 23:37:13.692329 containerd[1499]: time="2025-09-09T23:37:13.692305120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 23:37:13.692329 containerd[1499]: time="2025-09-09T23:37:13.692325800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 23:37:13.692366 containerd[1499]: time="2025-09-09T23:37:13.692341240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 23:37:13.692366 containerd[1499]: time="2025-09-09T23:37:13.692352000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 23:37:13.692366 containerd[1499]: time="2025-09-09T23:37:13.692362160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 23:37:13.692411 containerd[1499]: time="2025-09-09T23:37:13.692372640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 23:37:13.692411 containerd[1499]: time="2025-09-09T23:37:13.692382800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 23:37:13.692411 containerd[1499]: time="2025-09-09T23:37:13.692392200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 23:37:13.692462 containerd[1499]: time="2025-09-09T23:37:13.692409760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 23:37:13.692462 containerd[1499]: time="2025-09-09T23:37:13.692421000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 23:37:13.692462 containerd[1499]: time="2025-09-09T23:37:13.692430480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 23:37:13.693128 containerd[1499]: time="2025-09-09T23:37:13.692605640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 23:37:13.693128 containerd[1499]: time="2025-09-09T23:37:13.692637680Z" level=info msg="Start snapshots syncer"
Sep 9 23:37:13.693128 containerd[1499]: time="2025-09-09T23:37:13.692665600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 23:37:13.693215 containerd[1499]: time="2025-09-09T23:37:13.692949000Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 23:37:13.693215 containerd[1499]: time="2025-09-09T23:37:13.693047600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 23:37:13.693215 containerd[1499]: time="2025-09-09T23:37:13.693134440Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:37:13.693348 containerd[1499]: time="2025-09-09T23:37:13.693232560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:37:13.693348 containerd[1499]: time="2025-09-09T23:37:13.693278560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:37:13.693348 containerd[1499]: time="2025-09-09T23:37:13.693295920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:37:13.693348 containerd[1499]: time="2025-09-09T23:37:13.693307760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:37:13.693348 containerd[1499]: time="2025-09-09T23:37:13.693319880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:37:13.693348 containerd[1499]: time="2025-09-09T23:37:13.693330680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:37:13.693348 containerd[1499]: time="2025-09-09T23:37:13.693341000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693364800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693376360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693386760Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693428600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693442720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693450680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693462800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693470400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693479640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 23:37:13.693484 containerd[1499]: time="2025-09-09T23:37:13.693489600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 23:37:13.693904 containerd[1499]: time="2025-09-09T23:37:13.693564160Z" level=info msg="runtime interface created"
Sep 9 23:37:13.693904 containerd[1499]: time="2025-09-09T23:37:13.693569360Z" level=info msg="created NRI interface"
Sep 9 23:37:13.693904 containerd[1499]: time="2025-09-09T23:37:13.693578560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 23:37:13.693904 containerd[1499]: time="2025-09-09T23:37:13.693589760Z" level=info msg="Connect containerd service"
Sep 9 23:37:13.693904 containerd[1499]: time="2025-09-09T23:37:13.693629320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:37:13.694469 containerd[1499]: time="2025-09-09T23:37:13.694443160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:37:13.775482 containerd[1499]: time="2025-09-09T23:37:13.774912080Z" level=info msg="Start subscribing containerd event"
Sep 9 23:37:13.775776 containerd[1499]: time="2025-09-09T23:37:13.775752680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:37:13.775843 containerd[1499]: time="2025-09-09T23:37:13.775825600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:37:13.776431 containerd[1499]: time="2025-09-09T23:37:13.776402880Z" level=info msg="Start recovering state"
Sep 9 23:37:13.776547 containerd[1499]: time="2025-09-09T23:37:13.776529120Z" level=info msg="Start event monitor"
Sep 9 23:37:13.776573 containerd[1499]: time="2025-09-09T23:37:13.776550840Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:37:13.776573 containerd[1499]: time="2025-09-09T23:37:13.776564600Z" level=info msg="Start streaming server"
Sep 9 23:37:13.776605 containerd[1499]: time="2025-09-09T23:37:13.776574520Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 23:37:13.776605 containerd[1499]: time="2025-09-09T23:37:13.776586480Z" level=info msg="runtime interface starting up..."
Sep 9 23:37:13.776605 containerd[1499]: time="2025-09-09T23:37:13.776592520Z" level=info msg="starting plugins..."
Sep 9 23:37:13.776679 containerd[1499]: time="2025-09-09T23:37:13.776606160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 23:37:13.776854 containerd[1499]: time="2025-09-09T23:37:13.776839880Z" level=info msg="containerd successfully booted in 0.102380s"
Sep 9 23:37:13.776939 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 23:37:13.932432 tar[1486]: linux-arm64/LICENSE
Sep 9 23:37:13.932511 tar[1486]: linux-arm64/README.md
Sep 9 23:37:13.954475 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:37:14.394651 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:37:14.414292 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:37:14.416841 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:37:14.435537 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:37:14.435748 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:37:14.438345 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:37:14.462750 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:37:14.465513 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:37:14.467544 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 23:37:14.468529 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 23:37:14.953359 systemd-networkd[1431]: eth0: Gained IPv6LL
Sep 9 23:37:14.956199 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:37:14.957592 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:37:14.961547 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 23:37:14.963728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:37:14.965567 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:37:14.989693 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:37:14.991747 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 23:37:14.992018 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 23:37:14.994889 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:37:15.533877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:37:15.535612 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:37:15.537531 systemd[1]: Startup finished in 1.994s (kernel) + 5.245s (initrd) + 3.742s (userspace) = 10.983s.
Sep 9 23:37:15.537814 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:37:15.902022 kubelet[1609]: E0909 23:37:15.901653    1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:37:15.906849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:37:15.906978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:37:15.908349 systemd[1]: kubelet.service: Consumed 766ms CPU time, 256.4M memory peak.
Sep 9 23:37:19.619891 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:37:19.623454 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:57526.service - OpenSSH per-connection server daemon (10.0.0.1:57526).
Sep 9 23:37:19.699982 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 57526 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:37:19.702064 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:37:19.709160 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:37:19.711722 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:37:19.718966 systemd-logind[1470]: New session 1 of user core.
Sep 9 23:37:19.733333 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:37:19.739072 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:37:19.764629 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 23:37:19.767793 systemd-logind[1470]: New session c1 of user core. Sep 9 23:37:19.886050 systemd[1626]: Queued start job for default target default.target. Sep 9 23:37:19.909494 systemd[1626]: Created slice app.slice - User Application Slice. Sep 9 23:37:19.909532 systemd[1626]: Reached target paths.target - Paths. Sep 9 23:37:19.909573 systemd[1626]: Reached target timers.target - Timers. Sep 9 23:37:19.910919 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 23:37:19.921095 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 23:37:19.921176 systemd[1626]: Reached target sockets.target - Sockets. Sep 9 23:37:19.921219 systemd[1626]: Reached target basic.target - Basic System. Sep 9 23:37:19.921271 systemd[1626]: Reached target default.target - Main User Target. Sep 9 23:37:19.921299 systemd[1626]: Startup finished in 145ms. Sep 9 23:37:19.921874 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 23:37:19.924821 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 23:37:19.983138 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:57534.service - OpenSSH per-connection server daemon (10.0.0.1:57534). Sep 9 23:37:20.037743 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 57534 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:37:20.039256 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:20.045632 systemd-logind[1470]: New session 2 of user core. Sep 9 23:37:20.055513 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 9 23:37:20.108278 sshd[1639]: Connection closed by 10.0.0.1 port 57534 Sep 9 23:37:20.108734 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:20.118682 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:57534.service: Deactivated successfully. Sep 9 23:37:20.121432 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 23:37:20.122342 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. Sep 9 23:37:20.127475 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:53794.service - OpenSSH per-connection server daemon (10.0.0.1:53794). Sep 9 23:37:20.129191 systemd-logind[1470]: Removed session 2. Sep 9 23:37:20.180403 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:37:20.183090 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:20.188926 systemd-logind[1470]: New session 3 of user core. Sep 9 23:37:20.202464 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:37:20.249727 sshd[1647]: Connection closed by 10.0.0.1 port 53794 Sep 9 23:37:20.250070 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:20.263687 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:53794.service: Deactivated successfully. Sep 9 23:37:20.267493 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 23:37:20.268389 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. Sep 9 23:37:20.271364 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:53806.service - OpenSSH per-connection server daemon (10.0.0.1:53806). Sep 9 23:37:20.272050 systemd-logind[1470]: Removed session 3. 
Sep 9 23:37:20.321163 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 53806 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:37:20.322808 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:20.326988 systemd-logind[1470]: New session 4 of user core. Sep 9 23:37:20.335457 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 23:37:20.387403 sshd[1655]: Connection closed by 10.0.0.1 port 53806 Sep 9 23:37:20.387822 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:20.401561 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:53806.service: Deactivated successfully. Sep 9 23:37:20.408469 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:37:20.411589 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:37:20.416122 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:53812.service - OpenSSH per-connection server daemon (10.0.0.1:53812). Sep 9 23:37:20.418503 systemd-logind[1470]: Removed session 4. Sep 9 23:37:20.472964 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 53812 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:37:20.474740 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:20.479910 systemd-logind[1470]: New session 5 of user core. Sep 9 23:37:20.495558 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 23:37:20.555863 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:37:20.556134 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:20.569897 sudo[1664]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:20.571984 sshd[1663]: Connection closed by 10.0.0.1 port 53812 Sep 9 23:37:20.572393 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:20.583083 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:53812.service: Deactivated successfully. Sep 9 23:37:20.585726 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 23:37:20.586512 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:37:20.588869 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828). Sep 9 23:37:20.590207 systemd-logind[1470]: Removed session 5. Sep 9 23:37:20.655759 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:37:20.657273 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:20.666353 systemd-logind[1470]: New session 6 of user core. Sep 9 23:37:20.678511 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 9 23:37:20.736332 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:37:20.736645 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:20.815977 sudo[1674]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:20.821567 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:37:20.821874 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:20.831041 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:37:20.876660 augenrules[1696]: No rules Sep 9 23:37:20.877924 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:37:20.878143 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:37:20.880395 sudo[1673]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:20.881698 sshd[1672]: Connection closed by 10.0.0.1 port 53828 Sep 9 23:37:20.882079 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:20.894332 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:53828.service: Deactivated successfully. Sep 9 23:37:20.896004 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:37:20.896836 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:37:20.899277 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:53830.service - OpenSSH per-connection server daemon (10.0.0.1:53830). Sep 9 23:37:20.899765 systemd-logind[1470]: Removed session 6. Sep 9 23:37:20.964396 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 53830 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:37:20.965766 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:20.970427 systemd-logind[1470]: New session 7 of user core. 
Sep 9 23:37:20.977522 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 23:37:21.029283 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:37:21.029554 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:21.353532 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 23:37:21.371608 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:37:21.587417 dockerd[1728]: time="2025-09-09T23:37:21.587367265Z" level=info msg="Starting up" Sep 9 23:37:21.588817 dockerd[1728]: time="2025-09-09T23:37:21.588792832Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 23:37:21.631467 dockerd[1728]: time="2025-09-09T23:37:21.631372479Z" level=info msg="Loading containers: start." Sep 9 23:37:21.640278 kernel: Initializing XFRM netlink socket Sep 9 23:37:21.832671 systemd-networkd[1431]: docker0: Link UP Sep 9 23:37:21.836806 dockerd[1728]: time="2025-09-09T23:37:21.836762022Z" level=info msg="Loading containers: done." 
Sep 9 23:37:21.852706 dockerd[1728]: time="2025-09-09T23:37:21.852625988Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 23:37:21.852854 dockerd[1728]: time="2025-09-09T23:37:21.852719931Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 9 23:37:21.852854 dockerd[1728]: time="2025-09-09T23:37:21.852826793Z" level=info msg="Initializing buildkit" Sep 9 23:37:21.873836 dockerd[1728]: time="2025-09-09T23:37:21.873783169Z" level=info msg="Completed buildkit initialization" Sep 9 23:37:21.880050 dockerd[1728]: time="2025-09-09T23:37:21.879993635Z" level=info msg="Daemon has completed initialization" Sep 9 23:37:21.880190 dockerd[1728]: time="2025-09-09T23:37:21.880072295Z" level=info msg="API listen on /run/docker.sock" Sep 9 23:37:21.880395 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 23:37:22.604265 containerd[1499]: time="2025-09-09T23:37:22.604151576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 23:37:23.176775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842466666.mount: Deactivated successfully. 
Sep 9 23:37:24.108640 containerd[1499]: time="2025-09-09T23:37:24.108589097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:24.109609 containerd[1499]: time="2025-09-09T23:37:24.109573164Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 9 23:37:24.111279 containerd[1499]: time="2025-09-09T23:37:24.110315053Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:24.113206 containerd[1499]: time="2025-09-09T23:37:24.113173501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:24.114097 containerd[1499]: time="2025-09-09T23:37:24.114058432Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.509866572s" Sep 9 23:37:24.114133 containerd[1499]: time="2025-09-09T23:37:24.114097944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 9 23:37:24.115461 containerd[1499]: time="2025-09-09T23:37:24.115411919Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 23:37:25.406825 containerd[1499]: time="2025-09-09T23:37:25.406759058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:25.408075 containerd[1499]: time="2025-09-09T23:37:25.407858089Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 9 23:37:25.408857 containerd[1499]: time="2025-09-09T23:37:25.408826896Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:25.411385 containerd[1499]: time="2025-09-09T23:37:25.411347523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:25.412855 containerd[1499]: time="2025-09-09T23:37:25.412805352Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.297246622s" Sep 9 23:37:25.412855 containerd[1499]: time="2025-09-09T23:37:25.412839920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 9 23:37:25.413634 containerd[1499]: time="2025-09-09T23:37:25.413613135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 23:37:26.157362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 23:37:26.158719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:26.303421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:37:26.306562 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:37:26.360101 kubelet[2009]: E0909 23:37:26.360051 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:37:26.363998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:37:26.364116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:37:26.364419 systemd[1]: kubelet.service: Consumed 157ms CPU time, 106M memory peak. Sep 9 23:37:26.699011 containerd[1499]: time="2025-09-09T23:37:26.698383077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:26.699011 containerd[1499]: time="2025-09-09T23:37:26.698928808Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 9 23:37:26.699929 containerd[1499]: time="2025-09-09T23:37:26.699876207Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:26.702096 containerd[1499]: time="2025-09-09T23:37:26.702064924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:26.703957 containerd[1499]: time="2025-09-09T23:37:26.703921703Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id 
\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.290281045s" Sep 9 23:37:26.703957 containerd[1499]: time="2025-09-09T23:37:26.703957897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 9 23:37:26.704379 containerd[1499]: time="2025-09-09T23:37:26.704343730Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 23:37:27.715475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313662721.mount: Deactivated successfully. Sep 9 23:37:27.975828 containerd[1499]: time="2025-09-09T23:37:27.975670735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:27.976532 containerd[1499]: time="2025-09-09T23:37:27.976430928Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 9 23:37:27.977456 containerd[1499]: time="2025-09-09T23:37:27.977423954Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:27.979433 containerd[1499]: time="2025-09-09T23:37:27.979396101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:27.980321 containerd[1499]: time="2025-09-09T23:37:27.980290488Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.275908649s" Sep 9 23:37:27.980371 containerd[1499]: time="2025-09-09T23:37:27.980327631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 9 23:37:27.980838 containerd[1499]: time="2025-09-09T23:37:27.980811968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 23:37:28.586419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106476322.mount: Deactivated successfully. Sep 9 23:37:29.446012 containerd[1499]: time="2025-09-09T23:37:29.445955939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:29.459459 containerd[1499]: time="2025-09-09T23:37:29.459369486Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 9 23:37:29.512846 containerd[1499]: time="2025-09-09T23:37:29.512799779Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:29.527732 containerd[1499]: time="2025-09-09T23:37:29.527611630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:29.528865 containerd[1499]: time="2025-09-09T23:37:29.528817149Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.547972151s" Sep 9 23:37:29.528865 containerd[1499]: time="2025-09-09T23:37:29.528869393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 23:37:29.529300 containerd[1499]: time="2025-09-09T23:37:29.529275491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 23:37:30.073044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945348859.mount: Deactivated successfully. Sep 9 23:37:30.078364 containerd[1499]: time="2025-09-09T23:37:30.078310828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:37:30.079676 containerd[1499]: time="2025-09-09T23:37:30.079628150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 9 23:37:30.082698 containerd[1499]: time="2025-09-09T23:37:30.082282424Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:37:30.084590 containerd[1499]: time="2025-09-09T23:37:30.084513372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:37:30.085272 containerd[1499]: time="2025-09-09T23:37:30.085066261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 555.756862ms" Sep 9 23:37:30.085272 containerd[1499]: time="2025-09-09T23:37:30.085103967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 23:37:30.085632 containerd[1499]: time="2025-09-09T23:37:30.085556320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 23:37:30.617221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901687377.mount: Deactivated successfully. Sep 9 23:37:32.352737 containerd[1499]: time="2025-09-09T23:37:32.352687185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:32.353425 containerd[1499]: time="2025-09-09T23:37:32.353386956Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 9 23:37:32.354280 containerd[1499]: time="2025-09-09T23:37:32.354232293Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:32.357389 containerd[1499]: time="2025-09-09T23:37:32.357346664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:32.358511 containerd[1499]: time="2025-09-09T23:37:32.358471017Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"66535646\" in 2.272875109s" Sep 9 23:37:32.358629 containerd[1499]: time="2025-09-09T23:37:32.358612438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 9 23:37:36.614558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 23:37:36.616012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:36.632369 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 23:37:36.632464 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 23:37:36.632714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:36.634953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:36.662924 systemd[1]: Reload requested from client PID 2171 ('systemctl') (unit session-7.scope)... Sep 9 23:37:36.662944 systemd[1]: Reloading... Sep 9 23:37:36.745299 zram_generator::config[2220]: No configuration found. Sep 9 23:37:36.809414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:37:36.894313 systemd[1]: Reloading finished in 231 ms. Sep 9 23:37:36.960794 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 23:37:36.960880 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 23:37:36.961117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:36.961168 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95M memory peak. Sep 9 23:37:36.962788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:37.090968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:37:37.103649 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:37:37.140312 kubelet[2259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:37:37.140312 kubelet[2259]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 23:37:37.140312 kubelet[2259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:37:37.140312 kubelet[2259]: I0909 23:37:37.139806 2259 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:37:37.731303 kubelet[2259]: I0909 23:37:37.731264 2259 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 23:37:37.731303 kubelet[2259]: I0909 23:37:37.731295 2259 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:37:37.731585 kubelet[2259]: I0909 23:37:37.731556 2259 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 23:37:37.753546 kubelet[2259]: I0909 23:37:37.753366 2259 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:37:37.753546 kubelet[2259]: E0909 23:37:37.753485 2259 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:37:37.760244 kubelet[2259]: I0909 23:37:37.760207 2259 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:37:37.763836 kubelet[2259]: I0909 23:37:37.763801 2259 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:37:37.764608 kubelet[2259]: I0909 23:37:37.764573 2259 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 23:37:37.764766 kubelet[2259]: I0909 23:37:37.764728 2259 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:37:37.764932 kubelet[2259]: I0909 23:37:37.764755 2259 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available",
"Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:37:37.765021 kubelet[2259]: I0909 23:37:37.764989 2259 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:37:37.765021 kubelet[2259]: I0909 23:37:37.764998 2259 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 23:37:37.765305 kubelet[2259]: I0909 23:37:37.765269 2259 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:37:37.767207 kubelet[2259]: I0909 23:37:37.767177 2259 kubelet.go:408] "Attempting to sync node with API server" Sep 9 23:37:37.767254 kubelet[2259]: I0909 23:37:37.767209 2259 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:37:37.767254 kubelet[2259]: I0909 23:37:37.767235 2259 kubelet.go:314] "Adding apiserver pod source" Sep 9 23:37:37.767312 kubelet[2259]: I0909 23:37:37.767267 2259 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:37:37.771672 kubelet[2259]: W0909 23:37:37.771606 2259 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Sep 9 23:37:37.771672 kubelet[2259]: E0909 23:37:37.771671 2259 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:37:37.771916 kubelet[2259]: I0909 23:37:37.771876 2259 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 23:37:37.772118 kubelet[2259]: W0909 23:37:37.772069 2259 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Sep 9 23:37:37.772150 kubelet[2259]: E0909 23:37:37.772121 2259 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:37:37.772611 kubelet[2259]: I0909 23:37:37.772595 2259 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:37:37.772825 kubelet[2259]: W0909 23:37:37.772811 2259 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 23:37:37.774008 kubelet[2259]: I0909 23:37:37.773845 2259 server.go:1274] "Started kubelet" Sep 9 23:37:37.774482 kubelet[2259]: I0909 23:37:37.774448 2259 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:37:37.775749 kubelet[2259]: I0909 23:37:37.775721 2259 server.go:449] "Adding debug handlers to kubelet server" Sep 9 23:37:37.778988 kubelet[2259]: I0909 23:37:37.778928 2259 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:37:37.779498 kubelet[2259]: I0909 23:37:37.779428 2259 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:37:37.782240 kubelet[2259]: E0909 23:37:37.779521 2259 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c1854730c35d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 23:37:37.773814621 +0000 UTC m=+0.667142887,LastTimestamp:2025-09-09 23:37:37.773814621 +0000 UTC m=+0.667142887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 23:37:37.782703 kubelet[2259]: E0909 23:37:37.782671 2259 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:37:37.783236 kubelet[2259]: I0909 23:37:37.783216 2259 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:37:37.783323 kubelet[2259]: I0909 23:37:37.783216 2259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:37:37.783446 kubelet[2259]: I0909 23:37:37.783433 2259 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 23:37:37.783987 kubelet[2259]: E0909 23:37:37.783964 2259 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:37:37.784310 kubelet[2259]: I0909 23:37:37.784169 2259 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 23:37:37.784310 kubelet[2259]: I0909 23:37:37.784222 2259 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:37:37.784788 kubelet[2259]: W0909 23:37:37.784741 2259 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Sep 9 23:37:37.784916 kubelet[2259]: E0909 23:37:37.784897 2259 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:37:37.784964 kubelet[2259]: I0909 23:37:37.784942 2259 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:37:37.785309 kubelet[2259]: E0909 23:37:37.785237 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Sep 9 23:37:37.785473 kubelet[2259]: I0909 23:37:37.785442 2259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:37:37.786521 kubelet[2259]: I0909 23:37:37.786495 2259 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:37:37.799449 kubelet[2259]: I0909 23:37:37.799055 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:37:37.799724 kubelet[2259]: I0909 23:37:37.799707 2259 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 23:37:37.799724 kubelet[2259]: I0909 23:37:37.799724 2259 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 23:37:37.800657 kubelet[2259]: I0909 23:37:37.799740 2259 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:37:37.801889 kubelet[2259]: I0909 23:37:37.801426 2259 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 23:37:37.801889 kubelet[2259]: I0909 23:37:37.801479 2259 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 23:37:37.801889 kubelet[2259]: I0909 23:37:37.801498 2259 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 23:37:37.801889 kubelet[2259]: E0909 23:37:37.801542 2259 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:37:37.807024 kubelet[2259]: W0909 23:37:37.802317 2259 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Sep 9 23:37:37.807024 kubelet[2259]: E0909 23:37:37.802391 2259 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:37:37.824384 kubelet[2259]: I0909 23:37:37.824354 2259 policy_none.go:49] "None policy: Start" Sep 9 23:37:37.825165 kubelet[2259]: I0909 23:37:37.825127 2259 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 23:37:37.825165 kubelet[2259]: I0909 23:37:37.825157 2259 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:37:37.834733 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:37:37.849501 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:37:37.852366 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 23:37:37.864132 kubelet[2259]: I0909 23:37:37.864083 2259 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:37:37.864353 kubelet[2259]: I0909 23:37:37.864323 2259 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:37:37.864389 kubelet[2259]: I0909 23:37:37.864344 2259 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:37:37.864684 kubelet[2259]: I0909 23:37:37.864658 2259 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:37:37.865741 kubelet[2259]: E0909 23:37:37.865714 2259 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 23:37:37.910567 systemd[1]: Created slice kubepods-burstable-pod4ca0217def61d756028d9c8bd727f020.slice - libcontainer container kubepods-burstable-pod4ca0217def61d756028d9c8bd727f020.slice. Sep 9 23:37:37.931932 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 23:37:37.954969 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Sep 9 23:37:37.965738 kubelet[2259]: I0909 23:37:37.965710 2259 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:37:37.966278 kubelet[2259]: E0909 23:37:37.966233 2259 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Sep 9 23:37:37.985955 kubelet[2259]: E0909 23:37:37.985843 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Sep 9 23:37:38.085111 kubelet[2259]: I0909 23:37:38.085043 2259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ca0217def61d756028d9c8bd727f020-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4ca0217def61d756028d9c8bd727f020\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:38.085111 kubelet[2259]: I0909 23:37:38.085108 2259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:38.085201 kubelet[2259]: I0909 23:37:38.085155 2259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ca0217def61d756028d9c8bd727f020-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ca0217def61d756028d9c8bd727f020\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:38.085226 kubelet[2259]: I0909 23:37:38.085199 2259 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ca0217def61d756028d9c8bd727f020-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ca0217def61d756028d9c8bd727f020\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:38.085226 kubelet[2259]: I0909 23:37:38.085219 2259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:38.085868 kubelet[2259]: I0909 23:37:38.085241 2259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:38.085999 kubelet[2259]: I0909 23:37:38.085932 2259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:37:38.087806 kubelet[2259]: I0909 23:37:38.086023 2259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:38.087806 kubelet[2259]: I0909 23:37:38.086070 2259 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:38.168261 kubelet[2259]: I0909 23:37:38.168214 2259 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:37:38.168808 kubelet[2259]: E0909 23:37:38.168767 2259 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Sep 9 23:37:38.232237 containerd[1499]: time="2025-09-09T23:37:38.232151301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4ca0217def61d756028d9c8bd727f020,Namespace:kube-system,Attempt:0,}" Sep 9 23:37:38.252922 containerd[1499]: time="2025-09-09T23:37:38.252793236Z" level=info msg="connecting to shim 35f67e7ba60ba9ddf3237a9e26ce39da75263e0f64bbec0df013178a6ad12537" address="unix:///run/containerd/s/553f6ec5b64dbe58f82ddb57bd810563d7ce40bdb05bcfb8032f87089e673ccd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:37:38.253995 containerd[1499]: time="2025-09-09T23:37:38.253946334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 23:37:38.258418 containerd[1499]: time="2025-09-09T23:37:38.258176009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 23:37:38.283451 containerd[1499]: time="2025-09-09T23:37:38.283388361Z" level=info msg="connecting to shim 097d33caa850643ec1afd1eb75bb7b7dbc57a4deb9016d6890cdc24fb30259d5" 
address="unix:///run/containerd/s/4b8d297fa1062cff75a09f5fe127e8cba643085b4accf3cf8c385f3fa2d82ef0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:37:38.284539 systemd[1]: Started cri-containerd-35f67e7ba60ba9ddf3237a9e26ce39da75263e0f64bbec0df013178a6ad12537.scope - libcontainer container 35f67e7ba60ba9ddf3237a9e26ce39da75263e0f64bbec0df013178a6ad12537. Sep 9 23:37:38.295761 containerd[1499]: time="2025-09-09T23:37:38.295477694Z" level=info msg="connecting to shim 96c6d5c8902e04643810d466c1d3a690baa9fec5519bef7f83991285fa7f18a7" address="unix:///run/containerd/s/be99823a3aa38fe039063c37cdc442953afd861bf45f01c87dee18d6c7036b6b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:37:38.315438 systemd[1]: Started cri-containerd-097d33caa850643ec1afd1eb75bb7b7dbc57a4deb9016d6890cdc24fb30259d5.scope - libcontainer container 097d33caa850643ec1afd1eb75bb7b7dbc57a4deb9016d6890cdc24fb30259d5. Sep 9 23:37:38.319482 systemd[1]: Started cri-containerd-96c6d5c8902e04643810d466c1d3a690baa9fec5519bef7f83991285fa7f18a7.scope - libcontainer container 96c6d5c8902e04643810d466c1d3a690baa9fec5519bef7f83991285fa7f18a7. 
Sep 9 23:37:38.334346 containerd[1499]: time="2025-09-09T23:37:38.334131362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4ca0217def61d756028d9c8bd727f020,Namespace:kube-system,Attempt:0,} returns sandbox id \"35f67e7ba60ba9ddf3237a9e26ce39da75263e0f64bbec0df013178a6ad12537\"" Sep 9 23:37:38.339851 containerd[1499]: time="2025-09-09T23:37:38.339776400Z" level=info msg="CreateContainer within sandbox \"35f67e7ba60ba9ddf3237a9e26ce39da75263e0f64bbec0df013178a6ad12537\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:37:38.349042 containerd[1499]: time="2025-09-09T23:37:38.348884285Z" level=info msg="Container caab2788a4354cbee5e10d7a167818e38131be43383e2bf67c93fb5e155f3e19: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:37:38.359032 containerd[1499]: time="2025-09-09T23:37:38.358984402Z" level=info msg="CreateContainer within sandbox \"35f67e7ba60ba9ddf3237a9e26ce39da75263e0f64bbec0df013178a6ad12537\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"caab2788a4354cbee5e10d7a167818e38131be43383e2bf67c93fb5e155f3e19\"" Sep 9 23:37:38.359768 containerd[1499]: time="2025-09-09T23:37:38.359730942Z" level=info msg="StartContainer for \"caab2788a4354cbee5e10d7a167818e38131be43383e2bf67c93fb5e155f3e19\"" Sep 9 23:37:38.362387 containerd[1499]: time="2025-09-09T23:37:38.362357791Z" level=info msg="connecting to shim caab2788a4354cbee5e10d7a167818e38131be43383e2bf67c93fb5e155f3e19" address="unix:///run/containerd/s/553f6ec5b64dbe58f82ddb57bd810563d7ce40bdb05bcfb8032f87089e673ccd" protocol=ttrpc version=3 Sep 9 23:37:38.363288 containerd[1499]: time="2025-09-09T23:37:38.363262521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"96c6d5c8902e04643810d466c1d3a690baa9fec5519bef7f83991285fa7f18a7\"" Sep 9 23:37:38.363828 containerd[1499]: 
time="2025-09-09T23:37:38.363693759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"097d33caa850643ec1afd1eb75bb7b7dbc57a4deb9016d6890cdc24fb30259d5\"" Sep 9 23:37:38.366711 containerd[1499]: time="2025-09-09T23:37:38.366673929Z" level=info msg="CreateContainer within sandbox \"96c6d5c8902e04643810d466c1d3a690baa9fec5519bef7f83991285fa7f18a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:37:38.366840 containerd[1499]: time="2025-09-09T23:37:38.366675087Z" level=info msg="CreateContainer within sandbox \"097d33caa850643ec1afd1eb75bb7b7dbc57a4deb9016d6890cdc24fb30259d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:37:38.374672 containerd[1499]: time="2025-09-09T23:37:38.374408385Z" level=info msg="Container 4227bcaf006d9311214767d47500e13a996b1643a2f17e8e51d618f9b224f22f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:37:38.378386 containerd[1499]: time="2025-09-09T23:37:38.378348438Z" level=info msg="Container 0280671a6b917a690ebda2305b62698bc3f211a902c990f9ca81b78d5d0d957e: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:37:38.383351 containerd[1499]: time="2025-09-09T23:37:38.383314429Z" level=info msg="CreateContainer within sandbox \"96c6d5c8902e04643810d466c1d3a690baa9fec5519bef7f83991285fa7f18a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4227bcaf006d9311214767d47500e13a996b1643a2f17e8e51d618f9b224f22f\"" Sep 9 23:37:38.383903 containerd[1499]: time="2025-09-09T23:37:38.383880734Z" level=info msg="StartContainer for \"4227bcaf006d9311214767d47500e13a996b1643a2f17e8e51d618f9b224f22f\"" Sep 9 23:37:38.385451 containerd[1499]: time="2025-09-09T23:37:38.385423695Z" level=info msg="connecting to shim 4227bcaf006d9311214767d47500e13a996b1643a2f17e8e51d618f9b224f22f" 
address="unix:///run/containerd/s/be99823a3aa38fe039063c37cdc442953afd861bf45f01c87dee18d6c7036b6b" protocol=ttrpc version=3 Sep 9 23:37:38.385584 systemd[1]: Started cri-containerd-caab2788a4354cbee5e10d7a167818e38131be43383e2bf67c93fb5e155f3e19.scope - libcontainer container caab2788a4354cbee5e10d7a167818e38131be43383e2bf67c93fb5e155f3e19. Sep 9 23:37:38.386265 containerd[1499]: time="2025-09-09T23:37:38.386172512Z" level=info msg="CreateContainer within sandbox \"097d33caa850643ec1afd1eb75bb7b7dbc57a4deb9016d6890cdc24fb30259d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0280671a6b917a690ebda2305b62698bc3f211a902c990f9ca81b78d5d0d957e\"" Sep 9 23:37:38.386564 containerd[1499]: time="2025-09-09T23:37:38.386541049Z" level=info msg="StartContainer for \"0280671a6b917a690ebda2305b62698bc3f211a902c990f9ca81b78d5d0d957e\"" Sep 9 23:37:38.387505 kubelet[2259]: E0909 23:37:38.387470 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Sep 9 23:37:38.391675 containerd[1499]: time="2025-09-09T23:37:38.391607881Z" level=info msg="connecting to shim 0280671a6b917a690ebda2305b62698bc3f211a902c990f9ca81b78d5d0d957e" address="unix:///run/containerd/s/4b8d297fa1062cff75a09f5fe127e8cba643085b4accf3cf8c385f3fa2d82ef0" protocol=ttrpc version=3 Sep 9 23:37:38.408550 systemd[1]: Started cri-containerd-4227bcaf006d9311214767d47500e13a996b1643a2f17e8e51d618f9b224f22f.scope - libcontainer container 4227bcaf006d9311214767d47500e13a996b1643a2f17e8e51d618f9b224f22f. Sep 9 23:37:38.419418 systemd[1]: Started cri-containerd-0280671a6b917a690ebda2305b62698bc3f211a902c990f9ca81b78d5d0d957e.scope - libcontainer container 0280671a6b917a690ebda2305b62698bc3f211a902c990f9ca81b78d5d0d957e. 
Sep 9 23:37:38.437652 containerd[1499]: time="2025-09-09T23:37:38.437611094Z" level=info msg="StartContainer for \"caab2788a4354cbee5e10d7a167818e38131be43383e2bf67c93fb5e155f3e19\" returns successfully" Sep 9 23:37:38.466329 containerd[1499]: time="2025-09-09T23:37:38.466285854Z" level=info msg="StartContainer for \"4227bcaf006d9311214767d47500e13a996b1643a2f17e8e51d618f9b224f22f\" returns successfully" Sep 9 23:37:38.470543 containerd[1499]: time="2025-09-09T23:37:38.470443642Z" level=info msg="StartContainer for \"0280671a6b917a690ebda2305b62698bc3f211a902c990f9ca81b78d5d0d957e\" returns successfully" Sep 9 23:37:38.570134 kubelet[2259]: I0909 23:37:38.570100 2259 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:37:39.854490 kubelet[2259]: E0909 23:37:39.854446 2259 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 23:37:39.930270 kubelet[2259]: I0909 23:37:39.930173 2259 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 23:37:39.930270 kubelet[2259]: E0909 23:37:39.930214 2259 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 23:37:40.773749 kubelet[2259]: I0909 23:37:40.773675 2259 apiserver.go:52] "Watching apiserver" Sep 9 23:37:40.785509 kubelet[2259]: I0909 23:37:40.785464 2259 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 23:37:42.031430 systemd[1]: Reload requested from client PID 2531 ('systemctl') (unit session-7.scope)... Sep 9 23:37:42.031719 systemd[1]: Reloading... Sep 9 23:37:42.101291 zram_generator::config[2574]: No configuration found. 
Sep 9 23:37:42.201800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:37:42.304900 systemd[1]: Reloading finished in 272 ms. Sep 9 23:37:42.332866 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:42.349316 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:37:42.349590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:42.349654 systemd[1]: kubelet.service: Consumed 798ms CPU time, 129.2M memory peak. Sep 9 23:37:42.351560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:42.512694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:42.523554 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:37:42.565546 kubelet[2616]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:37:42.565546 kubelet[2616]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 23:37:42.565546 kubelet[2616]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:37:42.565546 kubelet[2616]: I0909 23:37:42.565322 2616 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:37:42.571120 kubelet[2616]: I0909 23:37:42.570525 2616 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 23:37:42.571120 kubelet[2616]: I0909 23:37:42.570552 2616 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:37:42.571120 kubelet[2616]: I0909 23:37:42.570754 2616 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 23:37:42.573020 kubelet[2616]: I0909 23:37:42.572997 2616 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 23:37:42.575117 kubelet[2616]: I0909 23:37:42.575094 2616 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:37:42.578234 kubelet[2616]: I0909 23:37:42.578216 2616 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:37:42.580884 kubelet[2616]: I0909 23:37:42.580858 2616 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 23:37:42.580998 kubelet[2616]: I0909 23:37:42.580984 2616 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 23:37:42.581119 kubelet[2616]: I0909 23:37:42.581089 2616 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:37:42.581312 kubelet[2616]: I0909 23:37:42.581118 2616 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 23:37:42.581386 kubelet[2616]: I0909 23:37:42.581323 2616 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:37:42.581386 kubelet[2616]: I0909 23:37:42.581333 2616 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 23:37:42.581386 kubelet[2616]: I0909 23:37:42.581367 2616 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:37:42.581465 kubelet[2616]: I0909 23:37:42.581459 2616 kubelet.go:408] "Attempting to sync node with API server" Sep 9 23:37:42.581488 kubelet[2616]: I0909 23:37:42.581472 2616 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:37:42.581506 kubelet[2616]: I0909 23:37:42.581495 2616 kubelet.go:314] "Adding apiserver pod source" Sep 9 23:37:42.582133 kubelet[2616]: I0909 23:37:42.581511 2616 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:37:42.582133 kubelet[2616]: I0909 23:37:42.582006 2616 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 23:37:42.582522 kubelet[2616]: I0909 23:37:42.582495 2616 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:37:42.582928 kubelet[2616]: I0909 23:37:42.582906 2616 server.go:1274] "Started kubelet" Sep 9 23:37:42.584775 kubelet[2616]: I0909 23:37:42.584735 2616 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:37:42.585931 kubelet[2616]: I0909 23:37:42.585909 2616 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:37:42.586527 kubelet[2616]: I0909 23:37:42.586503 2616 server.go:449] "Adding debug handlers to kubelet server" Sep 9 23:37:42.587379 kubelet[2616]: I0909 23:37:42.587320 2616 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:37:42.587624 kubelet[2616]: I0909 23:37:42.587603 2616 server.go:236] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:37:42.589382 kubelet[2616]: I0909 23:37:42.589350 2616 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 23:37:42.589666 kubelet[2616]: E0909 23:37:42.589617 2616 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:37:42.589928 kubelet[2616]: I0909 23:37:42.589904 2616 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 23:37:42.590061 kubelet[2616]: I0909 23:37:42.590042 2616 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:37:42.594598 kubelet[2616]: I0909 23:37:42.594569 2616 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:37:42.594731 kubelet[2616]: I0909 23:37:42.594704 2616 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:37:42.595024 kubelet[2616]: I0909 23:37:42.595003 2616 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:37:42.602010 kubelet[2616]: I0909 23:37:42.601972 2616 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:37:42.602905 kubelet[2616]: I0909 23:37:42.602886 2616 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 23:37:42.602905 kubelet[2616]: I0909 23:37:42.602906 2616 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 23:37:42.602979 kubelet[2616]: I0909 23:37:42.602924 2616 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 23:37:42.603004 kubelet[2616]: E0909 23:37:42.602963 2616 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:37:42.604730 kubelet[2616]: I0909 23:37:42.604702 2616 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:37:42.647950 kubelet[2616]: I0909 23:37:42.647917 2616 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 23:37:42.647950 kubelet[2616]: I0909 23:37:42.647933 2616 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 23:37:42.647950 kubelet[2616]: I0909 23:37:42.647953 2616 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:37:42.648123 kubelet[2616]: I0909 23:37:42.648107 2616 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:37:42.648146 kubelet[2616]: I0909 23:37:42.648122 2616 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:37:42.648146 kubelet[2616]: I0909 23:37:42.648140 2616 policy_none.go:49] "None policy: Start" Sep 9 23:37:42.648697 kubelet[2616]: I0909 23:37:42.648667 2616 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 23:37:42.648770 kubelet[2616]: I0909 23:37:42.648705 2616 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:37:42.648866 kubelet[2616]: I0909 23:37:42.648810 2616 state_mem.go:75] "Updated machine memory state" Sep 9 23:37:42.653424 kubelet[2616]: I0909 23:37:42.653315 2616 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:37:42.653989 kubelet[2616]: I0909 23:37:42.653846 2616 eviction_manager.go:189] "Eviction manager: 
starting control loop" Sep 9 23:37:42.654436 kubelet[2616]: I0909 23:37:42.654400 2616 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:37:42.654675 kubelet[2616]: I0909 23:37:42.654650 2616 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:37:42.710372 kubelet[2616]: E0909 23:37:42.710313 2616 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:42.756716 kubelet[2616]: I0909 23:37:42.756674 2616 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:37:42.763454 kubelet[2616]: I0909 23:37:42.763417 2616 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 23:37:42.763588 kubelet[2616]: I0909 23:37:42.763576 2616 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 23:37:42.790591 kubelet[2616]: I0909 23:37:42.790557 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ca0217def61d756028d9c8bd727f020-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ca0217def61d756028d9c8bd727f020\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:42.790591 kubelet[2616]: I0909 23:37:42.790604 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ca0217def61d756028d9c8bd727f020-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ca0217def61d756028d9c8bd727f020\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:42.790746 kubelet[2616]: I0909 23:37:42.790628 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ca0217def61d756028d9c8bd727f020-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"4ca0217def61d756028d9c8bd727f020\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:42.790746 kubelet[2616]: I0909 23:37:42.790646 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:42.790746 kubelet[2616]: I0909 23:37:42.790675 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:42.790746 kubelet[2616]: I0909 23:37:42.790692 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:37:42.790746 kubelet[2616]: I0909 23:37:42.790707 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:42.790847 kubelet[2616]: I0909 23:37:42.790722 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:42.790847 kubelet[2616]: I0909 23:37:42.790752 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:37:43.031358 sudo[2650]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 23:37:43.031627 sudo[2650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 23:37:43.482535 sudo[2650]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:43.582648 kubelet[2616]: I0909 23:37:43.582584 2616 apiserver.go:52] "Watching apiserver" Sep 9 23:37:43.590433 kubelet[2616]: I0909 23:37:43.590406 2616 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 23:37:43.636257 kubelet[2616]: E0909 23:37:43.635839 2616 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:37:43.653296 kubelet[2616]: I0909 23:37:43.651912 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.651430697 podStartE2EDuration="1.651430697s" podCreationTimestamp="2025-09-09 23:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:37:43.65057447 +0000 UTC m=+1.123987437" watchObservedRunningTime="2025-09-09 23:37:43.651430697 +0000 UTC m=+1.124843624" Sep 9 23:37:43.669636 kubelet[2616]: I0909 23:37:43.669577 2616 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.669560092 podStartE2EDuration="3.669560092s" podCreationTimestamp="2025-09-09 23:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:37:43.659822219 +0000 UTC m=+1.133235226" watchObservedRunningTime="2025-09-09 23:37:43.669560092 +0000 UTC m=+1.142973019" Sep 9 23:37:43.680674 kubelet[2616]: I0909 23:37:43.680602 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.680587399 podStartE2EDuration="1.680587399s" podCreationTimestamp="2025-09-09 23:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:37:43.669885908 +0000 UTC m=+1.143298875" watchObservedRunningTime="2025-09-09 23:37:43.680587399 +0000 UTC m=+1.154000366" Sep 9 23:37:44.761022 sudo[1708]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:44.762422 sshd[1707]: Connection closed by 10.0.0.1 port 53830 Sep 9 23:37:44.762909 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:44.767067 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:53830.service: Deactivated successfully. Sep 9 23:37:44.769419 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:37:44.769750 systemd[1]: session-7.scope: Consumed 5.973s CPU time, 264M memory peak. Sep 9 23:37:44.770848 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:37:44.772047 systemd-logind[1470]: Removed session 7. 
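The kubelet messages throughout this log carry the standard klog header: a severity letter (I/W/E/F), an MMDD date, a wall-clock time, the PID, and the source file:line, followed by the message. A rough Python sketch for pulling that header apart (the sample line is copied from the log; the regex is an assumption that works for these entries, not a full klog grammar):

```python
import re

# klog header as emitted by the kubelet lines above, e.g.
#   E0909 23:37:42.589617 2616 kubelet_node_status.go:453] "Error getting ..."
KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<date>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+'
    r'(?P<pid>\d+) (?P<src>[\w.-]+\.go:\d+)\] (?P<msg>.*)'
)

line = ('E0909 23:37:42.589617 2616 kubelet_node_status.go:453] '
        '"Error getting the current node from lister"')
m = KLOG_RE.match(line)
print(m.group('sev'), m.group('src'), m.group('msg'))
```

The `\s+` between time and PID tolerates klog's padded PID field as well as the single space seen in these entries.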
Sep 9 23:37:48.321343 kubelet[2616]: I0909 23:37:48.321309 2616 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:37:48.322196 containerd[1499]: time="2025-09-09T23:37:48.322095849Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 23:37:48.322464 kubelet[2616]: I0909 23:37:48.322324 2616 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:37:49.409127 systemd[1]: Created slice kubepods-besteffort-pod9429238f_5d82_4efe_9fe6_967ac28d19c7.slice - libcontainer container kubepods-besteffort-pod9429238f_5d82_4efe_9fe6_967ac28d19c7.slice. Sep 9 23:37:49.430682 systemd[1]: Created slice kubepods-burstable-pod79e730d5_0043_4884_b0d8_8982fce1f5f5.slice - libcontainer container kubepods-burstable-pod79e730d5_0043_4884_b0d8_8982fce1f5f5.slice. Sep 9 23:37:49.434585 kubelet[2616]: I0909 23:37:49.434551 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-lib-modules\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436320 kubelet[2616]: I0909 23:37:49.434592 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-cgroup\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436320 kubelet[2616]: I0909 23:37:49.434613 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-xtables-lock\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " 
pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436320 kubelet[2616]: I0909 23:37:49.434627 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-etc-cni-netd\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436320 kubelet[2616]: I0909 23:37:49.434644 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt8nk\" (UniqueName: \"kubernetes.io/projected/9429238f-5d82-4efe-9fe6-967ac28d19c7-kube-api-access-lt8nk\") pod \"kube-proxy-stm62\" (UID: \"9429238f-5d82-4efe-9fe6-967ac28d19c7\") " pod="kube-system/kube-proxy-stm62" Sep 9 23:37:49.436320 kubelet[2616]: I0909 23:37:49.434671 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-kernel\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436464 kubelet[2616]: I0909 23:37:49.434684 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-hubble-tls\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436464 kubelet[2616]: I0909 23:37:49.434697 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cni-path\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436464 kubelet[2616]: I0909 23:37:49.434711 2616 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj44j\" (UniqueName: \"kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-kube-api-access-zj44j\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436464 kubelet[2616]: I0909 23:37:49.434734 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9429238f-5d82-4efe-9fe6-967ac28d19c7-kube-proxy\") pod \"kube-proxy-stm62\" (UID: \"9429238f-5d82-4efe-9fe6-967ac28d19c7\") " pod="kube-system/kube-proxy-stm62" Sep 9 23:37:49.436464 kubelet[2616]: I0909 23:37:49.434750 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9429238f-5d82-4efe-9fe6-967ac28d19c7-lib-modules\") pod \"kube-proxy-stm62\" (UID: \"9429238f-5d82-4efe-9fe6-967ac28d19c7\") " pod="kube-system/kube-proxy-stm62" Sep 9 23:37:49.436464 kubelet[2616]: I0909 23:37:49.434764 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e730d5-0043-4884-b0d8-8982fce1f5f5-clustermesh-secrets\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436579 kubelet[2616]: I0909 23:37:49.434780 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-config-path\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436579 kubelet[2616]: I0909 23:37:49.434811 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-run\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436579 kubelet[2616]: I0909 23:37:49.434839 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-bpf-maps\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436579 kubelet[2616]: I0909 23:37:49.434858 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9429238f-5d82-4efe-9fe6-967ac28d19c7-xtables-lock\") pod \"kube-proxy-stm62\" (UID: \"9429238f-5d82-4efe-9fe6-967ac28d19c7\") " pod="kube-system/kube-proxy-stm62" Sep 9 23:37:49.436579 kubelet[2616]: I0909 23:37:49.434886 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-hostproc\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.436579 kubelet[2616]: I0909 23:37:49.434903 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-net\") pod \"cilium-t9xsm\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") " pod="kube-system/cilium-t9xsm" Sep 9 23:37:49.462118 systemd[1]: Created slice kubepods-besteffort-pod338b1ff6_be49_48be_ae1e_09451dbffc93.slice - libcontainer container kubepods-besteffort-pod338b1ff6_be49_48be_ae1e_09451dbffc93.slice. 
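The `kubepods-*` slices created above embed each pod's UID with its dashes rewritten to underscores (the kubelet's systemd cgroup driver naming, e.g. `kubepods-burstable-pod79e730d5_0043_4884_b0d8_8982fce1f5f5.slice` for UID `79e730d5-0043-4884-b0d8-8982fce1f5f5`). A small helper, under that assumption, recovers the UID from a slice name:

```python
import re

def pod_uid_from_slice(slice_name):
    """Recover a pod UID from a kubepods slice name.

    Assumption: the systemd cgroup driver embeds the 36-character UID
    with its dashes replaced by underscores, as in the slices above.
    """
    m = re.search(r'pod([0-9a-f_]{36})\.slice$', slice_name)
    return m.group(1).replace('_', '-') if m else None

print(pod_uid_from_slice(
    'kubepods-besteffort-pod9429238f_5d82_4efe_9fe6_967ac28d19c7.slice'))
# 9429238f-5d82-4efe-9fe6-967ac28d19c7
```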
Sep 9 23:37:49.535657 kubelet[2616]: I0909 23:37:49.535613 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8t9k\" (UniqueName: \"kubernetes.io/projected/338b1ff6-be49-48be-ae1e-09451dbffc93-kube-api-access-b8t9k\") pod \"cilium-operator-5d85765b45-hh5td\" (UID: \"338b1ff6-be49-48be-ae1e-09451dbffc93\") " pod="kube-system/cilium-operator-5d85765b45-hh5td" Sep 9 23:37:49.535762 kubelet[2616]: I0909 23:37:49.535685 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/338b1ff6-be49-48be-ae1e-09451dbffc93-cilium-config-path\") pod \"cilium-operator-5d85765b45-hh5td\" (UID: \"338b1ff6-be49-48be-ae1e-09451dbffc93\") " pod="kube-system/cilium-operator-5d85765b45-hh5td" Sep 9 23:37:49.728908 containerd[1499]: time="2025-09-09T23:37:49.728789048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stm62,Uid:9429238f-5d82-4efe-9fe6-967ac28d19c7,Namespace:kube-system,Attempt:0,}" Sep 9 23:37:49.735804 containerd[1499]: time="2025-09-09T23:37:49.735573645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t9xsm,Uid:79e730d5-0043-4884-b0d8-8982fce1f5f5,Namespace:kube-system,Attempt:0,}" Sep 9 23:37:49.749414 containerd[1499]: time="2025-09-09T23:37:49.749365470Z" level=info msg="connecting to shim 9d34188968094de8b09ae69a8c5dee31f7a6c5a923b26603fc30e0a31f39ab3f" address="unix:///run/containerd/s/6a68ee0f269bc5f5318466a84be8c18e694bdb9f15b30f73c934826d1960e5bb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:37:49.754188 containerd[1499]: time="2025-09-09T23:37:49.754148893Z" level=info msg="connecting to shim 4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a" address="unix:///run/containerd/s/2fcc485e8675a88a46da0946a0e4b2d5196f77505301df6773af3909feecf781" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:37:49.770553 containerd[1499]: 
time="2025-09-09T23:37:49.770411242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hh5td,Uid:338b1ff6-be49-48be-ae1e-09451dbffc93,Namespace:kube-system,Attempt:0,}" Sep 9 23:37:49.779433 systemd[1]: Started cri-containerd-4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a.scope - libcontainer container 4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a. Sep 9 23:37:49.781195 systemd[1]: Started cri-containerd-9d34188968094de8b09ae69a8c5dee31f7a6c5a923b26603fc30e0a31f39ab3f.scope - libcontainer container 9d34188968094de8b09ae69a8c5dee31f7a6c5a923b26603fc30e0a31f39ab3f. Sep 9 23:37:49.791470 containerd[1499]: time="2025-09-09T23:37:49.791422488Z" level=info msg="connecting to shim 979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e" address="unix:///run/containerd/s/3f4be945f19da2d16d2feba86a07038e36debd8a305d81ac472470ffe4dad0ad" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:37:49.816993 containerd[1499]: time="2025-09-09T23:37:49.816925194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stm62,Uid:9429238f-5d82-4efe-9fe6-967ac28d19c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d34188968094de8b09ae69a8c5dee31f7a6c5a923b26603fc30e0a31f39ab3f\"" Sep 9 23:37:49.819638 systemd[1]: Started cri-containerd-979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e.scope - libcontainer container 979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e. 
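Each "connecting to shim" message above pairs a container or sandbox ID with the shim's ttrpc socket, and containers appear to reuse their pod sandbox's socket (note `db66af99…` later connecting to the same `/s/6a68ee0f…` address as sandbox `9d341889…`), so grouping by address reconstructs pod membership from the log alone. A sketch over two abbreviated copies of those entries (socket paths shortened here for readability):

```python
import re
from collections import defaultdict

# "connecting to shim <id>" with the shim's ttrpc socket address, as in
# the containerd entries above (paths truncated in this sample data).
SHIM_RE = re.compile(r'connecting to shim (\w+)" address="(unix://\S+?)"')

log = '''
msg="connecting to shim 9d34188968094de8b09ae69a8c5dee31f7a6c5a923b26603fc30e0a31f39ab3f" address="unix:///run/containerd/s/6a68ee0f269b"
msg="connecting to shim db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5" address="unix:///run/containerd/s/6a68ee0f269b"
'''
pods = defaultdict(list)
for cid, addr in SHIM_RE.findall(log):
    pods[addr].append(cid[:12])
for addr, members in pods.items():
    print(addr.rsplit('/', 1)[-1], members)
```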
Sep 9 23:37:49.822370 containerd[1499]: time="2025-09-09T23:37:49.822301064Z" level=info msg="CreateContainer within sandbox \"9d34188968094de8b09ae69a8c5dee31f7a6c5a923b26603fc30e0a31f39ab3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:37:49.826313 containerd[1499]: time="2025-09-09T23:37:49.826230161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t9xsm,Uid:79e730d5-0043-4884-b0d8-8982fce1f5f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\"" Sep 9 23:37:49.831201 containerd[1499]: time="2025-09-09T23:37:49.831163366Z" level=info msg="Container db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:37:49.831711 containerd[1499]: time="2025-09-09T23:37:49.831676481Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 23:37:49.842118 containerd[1499]: time="2025-09-09T23:37:49.842068487Z" level=info msg="CreateContainer within sandbox \"9d34188968094de8b09ae69a8c5dee31f7a6c5a923b26603fc30e0a31f39ab3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5\"" Sep 9 23:37:49.842836 containerd[1499]: time="2025-09-09T23:37:49.842809916Z" level=info msg="StartContainer for \"db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5\"" Sep 9 23:37:49.844237 containerd[1499]: time="2025-09-09T23:37:49.844205041Z" level=info msg="connecting to shim db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5" address="unix:///run/containerd/s/6a68ee0f269bc5f5318466a84be8c18e694bdb9f15b30f73c934826d1960e5bb" protocol=ttrpc version=3 Sep 9 23:37:49.868435 systemd[1]: Started cri-containerd-db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5.scope - libcontainer container 
db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5. Sep 9 23:37:49.871351 containerd[1499]: time="2025-09-09T23:37:49.871311863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hh5td,Uid:338b1ff6-be49-48be-ae1e-09451dbffc93,Namespace:kube-system,Attempt:0,} returns sandbox id \"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\"" Sep 9 23:37:49.905981 containerd[1499]: time="2025-09-09T23:37:49.905944950Z" level=info msg="StartContainer for \"db66af9931aa3c3fe91fb9ade5721b28da1176657756328c4d6f139e8ec76cb5\" returns successfully" Sep 9 23:37:50.660119 kubelet[2616]: I0909 23:37:50.660050 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stm62" podStartSLOduration=1.660034266 podStartE2EDuration="1.660034266s" podCreationTimestamp="2025-09-09 23:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:37:50.659606446 +0000 UTC m=+8.133019413" watchObservedRunningTime="2025-09-09 23:37:50.660034266 +0000 UTC m=+8.133447233" Sep 9 23:37:57.472778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435406688.mount: Deactivated successfully. Sep 9 23:37:58.559741 update_engine[1476]: I20250909 23:37:58.559648 1476 update_attempter.cc:509] Updating boot flags... 
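The `pod_startup_latency_tracker` entries above pack their data into `key="value"` and `key=value` pairs after the quoted message. A rough tokenizer for that tail (assumption: quoted values contain no escaped quotes, which holds for these lines):

```python
import re

# key="quoted value" or key=bare-token, as in the kubelet tails above.
KV_RE = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

tail = ('pod="kube-system/kube-proxy-stm62" podStartSLOduration=1.660034266 '
        'podCreationTimestamp="2025-09-09 23:37:49 +0000 UTC"')
fields = {k: quoted or bare for k, quoted, bare in KV_RE.findall(tail)}
print(fields['pod'], fields['podStartSLOduration'])
```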
Sep 9 23:37:58.654441 containerd[1499]: time="2025-09-09T23:37:58.654396641Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:58.654441 containerd[1499]: time="2025-09-09T23:37:58.654957532Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 9 23:37:58.662017 containerd[1499]: time="2025-09-09T23:37:58.661729826Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:58.666297 containerd[1499]: time="2025-09-09T23:37:58.666133586Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.834381535s" Sep 9 23:37:58.666297 containerd[1499]: time="2025-09-09T23:37:58.666186710Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 23:37:58.704486 containerd[1499]: time="2025-09-09T23:37:58.702672500Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 23:37:58.733790 containerd[1499]: time="2025-09-09T23:37:58.733749159Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:37:58.748288 containerd[1499]: time="2025-09-09T23:37:58.748000571Z" level=info msg="Container 9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:37:58.754266 containerd[1499]: time="2025-09-09T23:37:58.753923228Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\"" Sep 9 23:37:58.755286 containerd[1499]: time="2025-09-09T23:37:58.754671896Z" level=info msg="StartContainer for \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\"" Sep 9 23:37:58.755735 containerd[1499]: time="2025-09-09T23:37:58.755642944Z" level=info msg="connecting to shim 9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c" address="unix:///run/containerd/s/2fcc485e8675a88a46da0946a0e4b2d5196f77505301df6773af3909feecf781" protocol=ttrpc version=3 Sep 9 23:37:58.806472 systemd[1]: Started cri-containerd-9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c.scope - libcontainer container 9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c. Sep 9 23:37:58.832986 containerd[1499]: time="2025-09-09T23:37:58.832672931Z" level=info msg="StartContainer for \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" returns successfully" Sep 9 23:37:58.847551 systemd[1]: cri-containerd-9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c.scope: Deactivated successfully. 
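The cilium image pull logged above reports 157,646,710 bytes read in 8.834381535s; a quick back-of-the-envelope throughput check on those two numbers (values copied from the log, MiB taken as 2**20 bytes):

```python
# Numbers copied from the "stop pulling image" / "Pulled image" entries above.
bytes_read = 157_646_710
seconds = 8.834381535
rate = bytes_read / seconds / 2**20
print(f"{rate:.1f} MiB/s")  # roughly 17 MiB/s
```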
Sep 9 23:37:58.884395 containerd[1499]: time="2025-09-09T23:37:58.884326616Z" level=info msg="received exit event container_id:\"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" id:\"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" pid:3054 exited_at:{seconds:1757461078 nanos:867400001}" Sep 9 23:37:58.888467 containerd[1499]: time="2025-09-09T23:37:58.888373143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" id:\"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" pid:3054 exited_at:{seconds:1757461078 nanos:867400001}" Sep 9 23:37:58.919872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c-rootfs.mount: Deactivated successfully. Sep 9 23:37:59.684100 containerd[1499]: time="2025-09-09T23:37:59.683753489Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 23:37:59.693415 containerd[1499]: time="2025-09-09T23:37:59.693368078Z" level=info msg="Container 1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:37:59.703180 containerd[1499]: time="2025-09-09T23:37:59.703125719Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\"" Sep 9 23:37:59.703864 containerd[1499]: time="2025-09-09T23:37:59.703828220Z" level=info msg="StartContainer for \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\"" Sep 9 23:37:59.704761 containerd[1499]: time="2025-09-09T23:37:59.704735698Z" level=info msg="connecting to shim 
1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce" address="unix:///run/containerd/s/2fcc485e8675a88a46da0946a0e4b2d5196f77505301df6773af3909feecf781" protocol=ttrpc version=3 Sep 9 23:37:59.731480 systemd[1]: Started cri-containerd-1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce.scope - libcontainer container 1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce. Sep 9 23:37:59.772650 containerd[1499]: time="2025-09-09T23:37:59.772611792Z" level=info msg="StartContainer for \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" returns successfully" Sep 9 23:37:59.788305 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:37:59.788531 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:37:59.788711 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:37:59.790244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:37:59.792068 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:37:59.796388 systemd[1]: cri-containerd-1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce.scope: Deactivated successfully. 
Sep 9 23:37:59.797471 containerd[1499]: time="2025-09-09T23:37:59.797436653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" id:\"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" pid:3101 exited_at:{seconds:1757461079 nanos:796714831}" Sep 9 23:37:59.801758 containerd[1499]: time="2025-09-09T23:37:59.801710502Z" level=info msg="received exit event container_id:\"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" id:\"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" pid:3101 exited_at:{seconds:1757461079 nanos:796714831}" Sep 9 23:37:59.826321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce-rootfs.mount: Deactivated successfully. Sep 9 23:37:59.827533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:38:00.685513 containerd[1499]: time="2025-09-09T23:38:00.685470939Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 23:38:00.694879 containerd[1499]: time="2025-09-09T23:38:00.694830267Z" level=info msg="Container 0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:38:00.712173 containerd[1499]: time="2025-09-09T23:38:00.712118886Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\"" Sep 9 23:38:00.712682 containerd[1499]: time="2025-09-09T23:38:00.712648009Z" level=info msg="StartContainer for \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\"" Sep 9 23:38:00.714206 containerd[1499]: 
time="2025-09-09T23:38:00.714179535Z" level=info msg="connecting to shim 0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185" address="unix:///run/containerd/s/2fcc485e8675a88a46da0946a0e4b2d5196f77505301df6773af3909feecf781" protocol=ttrpc version=3 Sep 9 23:38:00.735439 systemd[1]: Started cri-containerd-0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185.scope - libcontainer container 0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185. Sep 9 23:38:00.750798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575211077.mount: Deactivated successfully. Sep 9 23:38:00.775443 containerd[1499]: time="2025-09-09T23:38:00.775398759Z" level=info msg="StartContainer for \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" returns successfully" Sep 9 23:38:00.778565 systemd[1]: cri-containerd-0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185.scope: Deactivated successfully. Sep 9 23:38:00.790911 containerd[1499]: time="2025-09-09T23:38:00.790872949Z" level=info msg="received exit event container_id:\"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" id:\"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" pid:3152 exited_at:{seconds:1757461080 nanos:790659851}" Sep 9 23:38:00.791221 containerd[1499]: time="2025-09-09T23:38:00.790932074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" id:\"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" pid:3152 exited_at:{seconds:1757461080 nanos:790659851}" Sep 9 23:38:00.812017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185-rootfs.mount: Deactivated successfully. 
Sep 9 23:38:01.690291 containerd[1499]: time="2025-09-09T23:38:01.689650032Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 23:38:01.710009 containerd[1499]: time="2025-09-09T23:38:01.709956059Z" level=info msg="Container e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:38:01.718163 containerd[1499]: time="2025-09-09T23:38:01.718113056Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\"" Sep 9 23:38:01.718668 containerd[1499]: time="2025-09-09T23:38:01.718640178Z" level=info msg="StartContainer for \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\"" Sep 9 23:38:01.719537 containerd[1499]: time="2025-09-09T23:38:01.719504965Z" level=info msg="connecting to shim e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d" address="unix:///run/containerd/s/2fcc485e8675a88a46da0946a0e4b2d5196f77505301df6773af3909feecf781" protocol=ttrpc version=3 Sep 9 23:38:01.740452 systemd[1]: Started cri-containerd-e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d.scope - libcontainer container e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d. Sep 9 23:38:01.765850 systemd[1]: cri-containerd-e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d.scope: Deactivated successfully. 
Sep 9 23:38:01.767362 containerd[1499]: time="2025-09-09T23:38:01.767324662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" id:\"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" pid:3192 exited_at:{seconds:1757461081 nanos:766686892}" Sep 9 23:38:01.768561 containerd[1499]: time="2025-09-09T23:38:01.768520755Z" level=info msg="received exit event container_id:\"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" id:\"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" pid:3192 exited_at:{seconds:1757461081 nanos:766686892}" Sep 9 23:38:01.776650 containerd[1499]: time="2025-09-09T23:38:01.776590906Z" level=info msg="StartContainer for \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" returns successfully" Sep 9 23:38:01.788707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d-rootfs.mount: Deactivated successfully. Sep 9 23:38:02.700896 containerd[1499]: time="2025-09-09T23:38:02.699992176Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 23:38:02.726548 containerd[1499]: time="2025-09-09T23:38:02.726502390Z" level=info msg="Container 3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:38:02.728551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201817921.mount: Deactivated successfully. 
Sep 9 23:38:02.734063 containerd[1499]: time="2025-09-09T23:38:02.733937064Z" level=info msg="CreateContainer within sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\"" Sep 9 23:38:02.734558 containerd[1499]: time="2025-09-09T23:38:02.734521987Z" level=info msg="StartContainer for \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\"" Sep 9 23:38:02.735599 containerd[1499]: time="2025-09-09T23:38:02.735565985Z" level=info msg="connecting to shim 3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a" address="unix:///run/containerd/s/2fcc485e8675a88a46da0946a0e4b2d5196f77505301df6773af3909feecf781" protocol=ttrpc version=3 Sep 9 23:38:02.756427 systemd[1]: Started cri-containerd-3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a.scope - libcontainer container 3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a. Sep 9 23:38:02.794008 containerd[1499]: time="2025-09-09T23:38:02.793970454Z" level=info msg="StartContainer for \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" returns successfully" Sep 9 23:38:02.907280 containerd[1499]: time="2025-09-09T23:38:02.907221568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" id:\"8772d72c0d7b3748dcc68f40abc198896432e9fa70e7188f76f622d495aeb281\" pid:3265 exited_at:{seconds:1757461082 nanos:906933587}" Sep 9 23:38:02.940191 kubelet[2616]: I0909 23:38:02.940144 2616 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 23:38:03.012948 systemd[1]: Created slice kubepods-burstable-pod1760cf85_f391_464c_803c_f4d6f2a8b992.slice - libcontainer container kubepods-burstable-pod1760cf85_f391_464c_803c_f4d6f2a8b992.slice. 
Sep 9 23:38:03.020818 systemd[1]: Created slice kubepods-burstable-pod7b4a9230_2736_4867_8819_42045de52edd.slice - libcontainer container kubepods-burstable-pod7b4a9230_2736_4867_8819_42045de52edd.slice. Sep 9 23:38:03.039940 kubelet[2616]: I0909 23:38:03.039891 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1760cf85-f391-464c-803c-f4d6f2a8b992-config-volume\") pod \"coredns-7c65d6cfc9-b7gfg\" (UID: \"1760cf85-f391-464c-803c-f4d6f2a8b992\") " pod="kube-system/coredns-7c65d6cfc9-b7gfg" Sep 9 23:38:03.039940 kubelet[2616]: I0909 23:38:03.039939 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4hr\" (UniqueName: \"kubernetes.io/projected/7b4a9230-2736-4867-8819-42045de52edd-kube-api-access-sg4hr\") pod \"coredns-7c65d6cfc9-2jjvp\" (UID: \"7b4a9230-2736-4867-8819-42045de52edd\") " pod="kube-system/coredns-7c65d6cfc9-2jjvp" Sep 9 23:38:03.040088 kubelet[2616]: I0909 23:38:03.039967 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b4a9230-2736-4867-8819-42045de52edd-config-volume\") pod \"coredns-7c65d6cfc9-2jjvp\" (UID: \"7b4a9230-2736-4867-8819-42045de52edd\") " pod="kube-system/coredns-7c65d6cfc9-2jjvp" Sep 9 23:38:03.040088 kubelet[2616]: I0909 23:38:03.039986 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9dnv\" (UniqueName: \"kubernetes.io/projected/1760cf85-f391-464c-803c-f4d6f2a8b992-kube-api-access-c9dnv\") pod \"coredns-7c65d6cfc9-b7gfg\" (UID: \"1760cf85-f391-464c-803c-f4d6f2a8b992\") " pod="kube-system/coredns-7c65d6cfc9-b7gfg" Sep 9 23:38:03.317435 containerd[1499]: time="2025-09-09T23:38:03.317384300Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7gfg,Uid:1760cf85-f391-464c-803c-f4d6f2a8b992,Namespace:kube-system,Attempt:0,}" Sep 9 23:38:03.329413 containerd[1499]: time="2025-09-09T23:38:03.328427044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2jjvp,Uid:7b4a9230-2736-4867-8819-42045de52edd,Namespace:kube-system,Attempt:0,}" Sep 9 23:38:03.541149 containerd[1499]: time="2025-09-09T23:38:03.541019463Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:38:03.541827 containerd[1499]: time="2025-09-09T23:38:03.541627546Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 23:38:03.542809 containerd[1499]: time="2025-09-09T23:38:03.542770628Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:38:03.544112 containerd[1499]: time="2025-09-09T23:38:03.544076800Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.841355616s" Sep 9 23:38:03.544112 containerd[1499]: time="2025-09-09T23:38:03.544113883Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 23:38:03.546661 
containerd[1499]: time="2025-09-09T23:38:03.546626182Z" level=info msg="CreateContainer within sandbox \"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 23:38:03.553721 containerd[1499]: time="2025-09-09T23:38:03.553672202Z" level=info msg="Container a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:38:03.559447 containerd[1499]: time="2025-09-09T23:38:03.559379407Z" level=info msg="CreateContainer within sandbox \"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\"" Sep 9 23:38:03.559983 containerd[1499]: time="2025-09-09T23:38:03.559957168Z" level=info msg="StartContainer for \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\"" Sep 9 23:38:03.561268 containerd[1499]: time="2025-09-09T23:38:03.561070407Z" level=info msg="connecting to shim a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e" address="unix:///run/containerd/s/3f4be945f19da2d16d2feba86a07038e36debd8a305d81ac472470ffe4dad0ad" protocol=ttrpc version=3 Sep 9 23:38:03.588467 systemd[1]: Started cri-containerd-a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e.scope - libcontainer container a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e. 
Sep 9 23:38:03.632772 containerd[1499]: time="2025-09-09T23:38:03.632735537Z" level=info msg="StartContainer for \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" returns successfully" Sep 9 23:38:03.826801 kubelet[2616]: I0909 23:38:03.826715 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t9xsm" podStartSLOduration=5.955069263 podStartE2EDuration="14.826695553s" podCreationTimestamp="2025-09-09 23:37:49 +0000 UTC" firstStartedPulling="2025-09-09 23:37:49.829294291 +0000 UTC m=+7.302707258" lastFinishedPulling="2025-09-09 23:37:58.700920581 +0000 UTC m=+16.174333548" observedRunningTime="2025-09-09 23:38:03.826152795 +0000 UTC m=+21.299565762" watchObservedRunningTime="2025-09-09 23:38:03.826695553 +0000 UTC m=+21.300108520" Sep 9 23:38:03.827177 kubelet[2616]: I0909 23:38:03.827135 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hh5td" podStartSLOduration=1.156190263 podStartE2EDuration="14.827124824s" podCreationTimestamp="2025-09-09 23:37:49 +0000 UTC" firstStartedPulling="2025-09-09 23:37:49.874045664 +0000 UTC m=+7.347458631" lastFinishedPulling="2025-09-09 23:38:03.544980225 +0000 UTC m=+21.018393192" observedRunningTime="2025-09-09 23:38:03.74976953 +0000 UTC m=+21.223182537" watchObservedRunningTime="2025-09-09 23:38:03.827124824 +0000 UTC m=+21.300537791" Sep 9 23:38:07.750890 systemd-networkd[1431]: cilium_host: Link UP Sep 9 23:38:07.751011 systemd-networkd[1431]: cilium_net: Link UP Sep 9 23:38:07.751139 systemd-networkd[1431]: cilium_host: Gained carrier Sep 9 23:38:07.751271 systemd-networkd[1431]: cilium_net: Gained carrier Sep 9 23:38:07.862555 systemd-networkd[1431]: cilium_vxlan: Link UP Sep 9 23:38:07.862561 systemd-networkd[1431]: cilium_vxlan: Gained carrier Sep 9 23:38:08.147320 kernel: NET: Registered PF_ALG protocol family Sep 9 23:38:08.154453 systemd-networkd[1431]: cilium_host: Gained IPv6LL Sep 9 
23:38:08.393476 systemd-networkd[1431]: cilium_net: Gained IPv6LL Sep 9 23:38:08.857136 systemd-networkd[1431]: lxc_health: Link UP Sep 9 23:38:08.857738 systemd-networkd[1431]: lxc_health: Gained carrier Sep 9 23:38:08.969472 systemd-networkd[1431]: cilium_vxlan: Gained IPv6LL Sep 9 23:38:09.019545 systemd-networkd[1431]: lxc7e3db2d6ae28: Link UP Sep 9 23:38:09.022310 kernel: eth0: renamed from tmpe269b Sep 9 23:38:09.022848 systemd-networkd[1431]: lxc7e3db2d6ae28: Gained carrier Sep 9 23:38:09.389101 systemd-networkd[1431]: lxc0fd394360c33: Link UP Sep 9 23:38:09.398565 kernel: eth0: renamed from tmpa234d Sep 9 23:38:09.400201 systemd-networkd[1431]: lxc0fd394360c33: Gained carrier Sep 9 23:38:10.121420 systemd-networkd[1431]: lxc_health: Gained IPv6LL Sep 9 23:38:10.569540 systemd-networkd[1431]: lxc0fd394360c33: Gained IPv6LL Sep 9 23:38:10.825459 systemd-networkd[1431]: lxc7e3db2d6ae28: Gained IPv6LL Sep 9 23:38:11.420010 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:40634.service - OpenSSH per-connection server daemon (10.0.0.1:40634). Sep 9 23:38:11.476008 sshd[3784]: Accepted publickey for core from 10.0.0.1 port 40634 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:11.477504 sshd-session[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:11.481998 systemd-logind[1470]: New session 8 of user core. Sep 9 23:38:11.495481 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 23:38:11.639526 sshd[3786]: Connection closed by 10.0.0.1 port 40634 Sep 9 23:38:11.639865 sshd-session[3784]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:11.643343 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:40634.service: Deactivated successfully. Sep 9 23:38:11.645041 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:38:11.646472 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:38:11.648048 systemd-logind[1470]: Removed session 8. 
Sep 9 23:38:12.885230 containerd[1499]: time="2025-09-09T23:38:12.885190991Z" level=info msg="connecting to shim a234d4937e1e6ae3458a0785d0e1a3703038ea3ee2c7ec0f1ed3de324ef7d892" address="unix:///run/containerd/s/8eb01e37394286693a55e99f2fce6ee5b5c0e4dfc59695f9692e670215153715" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:38:12.886307 containerd[1499]: time="2025-09-09T23:38:12.886222481Z" level=info msg="connecting to shim e269ba5519d9daeceb9cd01f9fb23d2cd392b6a10cefc502ab14103acdeeeb56" address="unix:///run/containerd/s/98943242c34824203764694b318500ab34794739a111f3cd0255e4254f9348da" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:38:12.916487 systemd[1]: Started cri-containerd-a234d4937e1e6ae3458a0785d0e1a3703038ea3ee2c7ec0f1ed3de324ef7d892.scope - libcontainer container a234d4937e1e6ae3458a0785d0e1a3703038ea3ee2c7ec0f1ed3de324ef7d892. Sep 9 23:38:12.917891 systemd[1]: Started cri-containerd-e269ba5519d9daeceb9cd01f9fb23d2cd392b6a10cefc502ab14103acdeeeb56.scope - libcontainer container e269ba5519d9daeceb9cd01f9fb23d2cd392b6a10cefc502ab14103acdeeeb56. 
Sep 9 23:38:12.930120 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:38:12.931181 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:38:12.963098 containerd[1499]: time="2025-09-09T23:38:12.963046228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7gfg,Uid:1760cf85-f391-464c-803c-f4d6f2a8b992,Namespace:kube-system,Attempt:0,} returns sandbox id \"a234d4937e1e6ae3458a0785d0e1a3703038ea3ee2c7ec0f1ed3de324ef7d892\"" Sep 9 23:38:12.964454 containerd[1499]: time="2025-09-09T23:38:12.964362052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2jjvp,Uid:7b4a9230-2736-4867-8819-42045de52edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e269ba5519d9daeceb9cd01f9fb23d2cd392b6a10cefc502ab14103acdeeeb56\"" Sep 9 23:38:12.966626 containerd[1499]: time="2025-09-09T23:38:12.966597559Z" level=info msg="CreateContainer within sandbox \"a234d4937e1e6ae3458a0785d0e1a3703038ea3ee2c7ec0f1ed3de324ef7d892\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:38:12.967256 containerd[1499]: time="2025-09-09T23:38:12.967215069Z" level=info msg="CreateContainer within sandbox \"e269ba5519d9daeceb9cd01f9fb23d2cd392b6a10cefc502ab14103acdeeeb56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:38:12.982092 containerd[1499]: time="2025-09-09T23:38:12.982052865Z" level=info msg="Container f7afdd1c7101eb45889bd39d454af88ef70127d21f7b8bb991e28521289d1302: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:38:12.982386 containerd[1499]: time="2025-09-09T23:38:12.982363120Z" level=info msg="Container 0bf24b8cc3984a2608ea0de101e48f129b2a489bf1e77033a41ccf050fc39981: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:38:12.990165 containerd[1499]: time="2025-09-09T23:38:12.990113214Z" level=info msg="CreateContainer within sandbox 
\"e269ba5519d9daeceb9cd01f9fb23d2cd392b6a10cefc502ab14103acdeeeb56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bf24b8cc3984a2608ea0de101e48f129b2a489bf1e77033a41ccf050fc39981\"" Sep 9 23:38:12.990394 containerd[1499]: time="2025-09-09T23:38:12.990362426Z" level=info msg="CreateContainer within sandbox \"a234d4937e1e6ae3458a0785d0e1a3703038ea3ee2c7ec0f1ed3de324ef7d892\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7afdd1c7101eb45889bd39d454af88ef70127d21f7b8bb991e28521289d1302\"" Sep 9 23:38:12.990848 containerd[1499]: time="2025-09-09T23:38:12.990809088Z" level=info msg="StartContainer for \"f7afdd1c7101eb45889bd39d454af88ef70127d21f7b8bb991e28521289d1302\"" Sep 9 23:38:12.990937 containerd[1499]: time="2025-09-09T23:38:12.990910733Z" level=info msg="StartContainer for \"0bf24b8cc3984a2608ea0de101e48f129b2a489bf1e77033a41ccf050fc39981\"" Sep 9 23:38:12.991603 containerd[1499]: time="2025-09-09T23:38:12.991569725Z" level=info msg="connecting to shim f7afdd1c7101eb45889bd39d454af88ef70127d21f7b8bb991e28521289d1302" address="unix:///run/containerd/s/8eb01e37394286693a55e99f2fce6ee5b5c0e4dfc59695f9692e670215153715" protocol=ttrpc version=3 Sep 9 23:38:12.994274 containerd[1499]: time="2025-09-09T23:38:12.993328449Z" level=info msg="connecting to shim 0bf24b8cc3984a2608ea0de101e48f129b2a489bf1e77033a41ccf050fc39981" address="unix:///run/containerd/s/98943242c34824203764694b318500ab34794739a111f3cd0255e4254f9348da" protocol=ttrpc version=3 Sep 9 23:38:13.019505 systemd[1]: Started cri-containerd-0bf24b8cc3984a2608ea0de101e48f129b2a489bf1e77033a41ccf050fc39981.scope - libcontainer container 0bf24b8cc3984a2608ea0de101e48f129b2a489bf1e77033a41ccf050fc39981. Sep 9 23:38:13.020833 systemd[1]: Started cri-containerd-f7afdd1c7101eb45889bd39d454af88ef70127d21f7b8bb991e28521289d1302.scope - libcontainer container f7afdd1c7101eb45889bd39d454af88ef70127d21f7b8bb991e28521289d1302. 
Sep 9 23:38:13.055647 containerd[1499]: time="2025-09-09T23:38:13.055601676Z" level=info msg="StartContainer for \"f7afdd1c7101eb45889bd39d454af88ef70127d21f7b8bb991e28521289d1302\" returns successfully" Sep 9 23:38:13.083470 containerd[1499]: time="2025-09-09T23:38:13.083431889Z" level=info msg="StartContainer for \"0bf24b8cc3984a2608ea0de101e48f129b2a489bf1e77033a41ccf050fc39981\" returns successfully" Sep 9 23:38:13.765828 kubelet[2616]: I0909 23:38:13.764877 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2jjvp" podStartSLOduration=24.764857223 podStartE2EDuration="24.764857223s" podCreationTimestamp="2025-09-09 23:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:38:13.76242407 +0000 UTC m=+31.235837037" watchObservedRunningTime="2025-09-09 23:38:13.764857223 +0000 UTC m=+31.238270190" Sep 9 23:38:13.864677 kubelet[2616]: I0909 23:38:13.864581 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b7gfg" podStartSLOduration=24.864561094 podStartE2EDuration="24.864561094s" podCreationTimestamp="2025-09-09 23:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:38:13.864454409 +0000 UTC m=+31.337867376" watchObservedRunningTime="2025-09-09 23:38:13.864561094 +0000 UTC m=+31.337974061" Sep 9 23:38:13.873417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1694059998.mount: Deactivated successfully. Sep 9 23:38:16.658959 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640). 
Sep 9 23:38:16.724934 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:16.727112 sshd-session[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:16.733885 systemd-logind[1470]: New session 9 of user core. Sep 9 23:38:16.743493 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:38:16.896806 sshd[3979]: Connection closed by 10.0.0.1 port 40640 Sep 9 23:38:16.897441 sshd-session[3977]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:16.901007 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:40640.service: Deactivated successfully. Sep 9 23:38:16.902819 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 23:38:16.904992 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit. Sep 9 23:38:16.908323 systemd-logind[1470]: Removed session 9. Sep 9 23:38:21.922193 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:39310.service - OpenSSH per-connection server daemon (10.0.0.1:39310). Sep 9 23:38:21.972081 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 39310 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:21.973392 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:21.978348 systemd-logind[1470]: New session 10 of user core. Sep 9 23:38:21.996453 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 23:38:22.110681 sshd[4000]: Connection closed by 10.0.0.1 port 39310 Sep 9 23:38:22.111043 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:22.114821 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:39310.service: Deactivated successfully. Sep 9 23:38:22.117933 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 23:38:22.118858 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit. 
Sep 9 23:38:22.120463 systemd-logind[1470]: Removed session 10. Sep 9 23:38:27.127493 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:39314.service - OpenSSH per-connection server daemon (10.0.0.1:39314). Sep 9 23:38:27.200196 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 39314 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:27.202091 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:27.208947 systemd-logind[1470]: New session 11 of user core. Sep 9 23:38:27.219545 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 23:38:27.363851 sshd[4022]: Connection closed by 10.0.0.1 port 39314 Sep 9 23:38:27.365184 sshd-session[4020]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:27.375021 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:39314.service: Deactivated successfully. Sep 9 23:38:27.379950 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:38:27.380739 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:38:27.385007 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:39316.service - OpenSSH per-connection server daemon (10.0.0.1:39316). Sep 9 23:38:27.386777 systemd-logind[1470]: Removed session 11. Sep 9 23:38:27.439001 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 39316 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:27.443188 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:27.448598 systemd-logind[1470]: New session 12 of user core. Sep 9 23:38:27.466465 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 23:38:27.653359 sshd[4039]: Connection closed by 10.0.0.1 port 39316 Sep 9 23:38:27.652322 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:27.665503 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:39316.service: Deactivated successfully. 
Sep 9 23:38:27.667436 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 23:38:27.672061 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit. Sep 9 23:38:27.678008 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:39328.service - OpenSSH per-connection server daemon (10.0.0.1:39328). Sep 9 23:38:27.684807 systemd-logind[1470]: Removed session 12. Sep 9 23:38:27.730018 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 39328 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:27.731520 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:27.736332 systemd-logind[1470]: New session 13 of user core. Sep 9 23:38:27.746467 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 23:38:27.877990 sshd[4053]: Connection closed by 10.0.0.1 port 39328 Sep 9 23:38:27.878358 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:27.882818 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:39328.service: Deactivated successfully. Sep 9 23:38:27.884574 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 23:38:27.885302 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit. Sep 9 23:38:27.887394 systemd-logind[1470]: Removed session 13. Sep 9 23:38:32.893907 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:34954.service - OpenSSH per-connection server daemon (10.0.0.1:34954). Sep 9 23:38:32.951882 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 34954 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:32.953382 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:32.957839 systemd-logind[1470]: New session 14 of user core. Sep 9 23:38:32.973588 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 9 23:38:33.105327 sshd[4069]: Connection closed by 10.0.0.1 port 34954 Sep 9 23:38:33.105685 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:33.111021 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:34954.service: Deactivated successfully. Sep 9 23:38:33.112905 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 23:38:33.115108 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit. Sep 9 23:38:33.116648 systemd-logind[1470]: Removed session 14. Sep 9 23:38:38.119236 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:34956.service - OpenSSH per-connection server daemon (10.0.0.1:34956). Sep 9 23:38:38.164754 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 34956 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:38.166244 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:38.171969 systemd-logind[1470]: New session 15 of user core. Sep 9 23:38:38.181025 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 23:38:38.318877 sshd[4084]: Connection closed by 10.0.0.1 port 34956 Sep 9 23:38:38.320337 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:38.327042 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:34956.service: Deactivated successfully. Sep 9 23:38:38.329291 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 23:38:38.330162 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit. Sep 9 23:38:38.333345 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:34964.service - OpenSSH per-connection server daemon (10.0.0.1:34964). Sep 9 23:38:38.334429 systemd-logind[1470]: Removed session 15. 
Sep 9 23:38:38.393451 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 34964 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:38.394885 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:38.402657 systemd-logind[1470]: New session 16 of user core. Sep 9 23:38:38.412496 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 23:38:38.626920 sshd[4101]: Connection closed by 10.0.0.1 port 34964 Sep 9 23:38:38.628751 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:38.637682 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:34964.service: Deactivated successfully. Sep 9 23:38:38.639539 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 23:38:38.641819 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit. Sep 9 23:38:38.644723 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:34968.service - OpenSSH per-connection server daemon (10.0.0.1:34968). Sep 9 23:38:38.645242 systemd-logind[1470]: Removed session 16. Sep 9 23:38:38.702335 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 34968 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:38.703721 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:38.708314 systemd-logind[1470]: New session 17 of user core. Sep 9 23:38:38.724465 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 23:38:39.948302 sshd[4114]: Connection closed by 10.0.0.1 port 34968 Sep 9 23:38:39.947964 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:39.963735 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:34968.service: Deactivated successfully. Sep 9 23:38:39.967289 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 23:38:39.968868 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit. 
Sep 9 23:38:39.974785 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:54138.service - OpenSSH per-connection server daemon (10.0.0.1:54138).
Sep 9 23:38:39.976133 systemd-logind[1470]: Removed session 17.
Sep 9 23:38:40.028736 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 54138 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:38:40.030056 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:38:40.034008 systemd-logind[1470]: New session 18 of user core.
Sep 9 23:38:40.045471 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 23:38:40.271317 sshd[4137]: Connection closed by 10.0.0.1 port 54138
Sep 9 23:38:40.271318 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
Sep 9 23:38:40.284925 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:54138.service: Deactivated successfully.
Sep 9 23:38:40.287349 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 23:38:40.289000 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit.
Sep 9 23:38:40.292005 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:54148.service - OpenSSH per-connection server daemon (10.0.0.1:54148).
Sep 9 23:38:40.292885 systemd-logind[1470]: Removed session 18.
Sep 9 23:38:40.357105 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 54148 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:38:40.358546 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:38:40.362948 systemd-logind[1470]: New session 19 of user core.
Sep 9 23:38:40.373497 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 23:38:40.490398 sshd[4151]: Connection closed by 10.0.0.1 port 54148
Sep 9 23:38:40.490968 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Sep 9 23:38:40.494540 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:54148.service: Deactivated successfully.
Sep 9 23:38:40.496220 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 23:38:40.497571 systemd-logind[1470]: Session 19 logged out. Waiting for processes to exit.
Sep 9 23:38:40.498552 systemd-logind[1470]: Removed session 19.
Sep 9 23:38:45.505866 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:54158.service - OpenSSH per-connection server daemon (10.0.0.1:54158).
Sep 9 23:38:45.568579 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 54158 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:38:45.570090 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:38:45.574609 systemd-logind[1470]: New session 20 of user core.
Sep 9 23:38:45.591451 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 23:38:45.706678 sshd[4172]: Connection closed by 10.0.0.1 port 54158
Sep 9 23:38:45.707010 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Sep 9 23:38:45.710369 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:54158.service: Deactivated successfully.
Sep 9 23:38:45.712772 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 23:38:45.714030 systemd-logind[1470]: Session 20 logged out. Waiting for processes to exit.
Sep 9 23:38:45.715724 systemd-logind[1470]: Removed session 20.
Sep 9 23:38:50.724143 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:54462.service - OpenSSH per-connection server daemon (10.0.0.1:54462).
Sep 9 23:38:50.794348 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 54462 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:38:50.796382 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:38:50.801991 systemd-logind[1470]: New session 21 of user core.
Sep 9 23:38:50.820483 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 23:38:50.957073 sshd[4190]: Connection closed by 10.0.0.1 port 54462
Sep 9 23:38:50.957424 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Sep 9 23:38:50.961234 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:54462.service: Deactivated successfully.
Sep 9 23:38:50.963699 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 23:38:50.966441 systemd-logind[1470]: Session 21 logged out. Waiting for processes to exit.
Sep 9 23:38:50.969211 systemd-logind[1470]: Removed session 21.
Sep 9 23:38:55.969147 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:54470.service - OpenSSH per-connection server daemon (10.0.0.1:54470).
Sep 9 23:38:56.023836 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 54470 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:38:56.025771 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:38:56.032440 systemd-logind[1470]: New session 22 of user core.
Sep 9 23:38:56.040470 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 23:38:56.161169 sshd[4205]: Connection closed by 10.0.0.1 port 54470
Sep 9 23:38:56.161586 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Sep 9 23:38:56.172407 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:54470.service: Deactivated successfully.
Sep 9 23:38:56.174090 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 23:38:56.175530 systemd-logind[1470]: Session 22 logged out. Waiting for processes to exit.
Sep 9 23:38:56.180560 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:54482.service - OpenSSH per-connection server daemon (10.0.0.1:54482).
Sep 9 23:38:56.181330 systemd-logind[1470]: Removed session 22.
Sep 9 23:38:56.235623 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 54482 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils
Sep 9 23:38:56.236976 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:38:56.242130 systemd-logind[1470]: New session 23 of user core.
Sep 9 23:38:56.262729 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 23:38:57.648649 containerd[1499]: time="2025-09-09T23:38:57.648581613Z" level=info msg="StopContainer for \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" with timeout 30 (s)"
Sep 9 23:38:57.660276 containerd[1499]: time="2025-09-09T23:38:57.659500674Z" level=info msg="Stop container \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" with signal terminated"
Sep 9 23:38:57.672402 containerd[1499]: time="2025-09-09T23:38:57.672342060Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:38:57.675290 systemd[1]: cri-containerd-a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e.scope: Deactivated successfully.
Sep 9 23:38:57.676116 containerd[1499]: time="2025-09-09T23:38:57.675495866Z" level=info msg="received exit event container_id:\"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" id:\"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" pid:3386 exited_at:{seconds:1757461137 nanos:675107906}"
Sep 9 23:38:57.676116 containerd[1499]: time="2025-09-09T23:38:57.675623187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" id:\"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" pid:3386 exited_at:{seconds:1757461137 nanos:675107906}"
Sep 9 23:38:57.679317 kubelet[2616]: E0909 23:38:57.679280 2616 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 23:38:57.680781 containerd[1499]: time="2025-09-09T23:38:57.680280156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" id:\"bc8f7fd31ffa002c02d1c6e3a11cc4e82bc0d5ad43b1305bb4791630ec163c2a\" pid:4241 exited_at:{seconds:1757461137 nanos:679855315}"
Sep 9 23:38:57.683155 containerd[1499]: time="2025-09-09T23:38:57.683112922Z" level=info msg="StopContainer for \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" with timeout 2 (s)"
Sep 9 23:38:57.683804 containerd[1499]: time="2025-09-09T23:38:57.683714363Z" level=info msg="Stop container \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" with signal terminated"
Sep 9 23:38:57.692656 systemd-networkd[1431]: lxc_health: Link DOWN
Sep 9 23:38:57.692661 systemd-networkd[1431]: lxc_health: Lost carrier
Sep 9 23:38:57.707973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e-rootfs.mount: Deactivated successfully.
Sep 9 23:38:57.715746 systemd[1]: cri-containerd-3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a.scope: Deactivated successfully.
Sep 9 23:38:57.716403 systemd[1]: cri-containerd-3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a.scope: Consumed 6.779s CPU time, 123.2M memory peak, 1.1M read from disk, 12.9M written to disk.
Sep 9 23:38:57.718722 containerd[1499]: time="2025-09-09T23:38:57.718653273Z" level=info msg="received exit event container_id:\"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" id:\"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" pid:3234 exited_at:{seconds:1757461137 nanos:718424072}"
Sep 9 23:38:57.719018 containerd[1499]: time="2025-09-09T23:38:57.718879793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" id:\"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" pid:3234 exited_at:{seconds:1757461137 nanos:718424072}"
Sep 9 23:38:57.728324 containerd[1499]: time="2025-09-09T23:38:57.728075251Z" level=info msg="StopContainer for \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" returns successfully"
Sep 9 23:38:57.732444 containerd[1499]: time="2025-09-09T23:38:57.732389820Z" level=info msg="StopPodSandbox for \"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\""
Sep 9 23:38:57.738017 containerd[1499]: time="2025-09-09T23:38:57.737956351Z" level=info msg="Container to stop \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:38:57.741351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a-rootfs.mount: Deactivated successfully.
Sep 9 23:38:57.750762 systemd[1]: cri-containerd-979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e.scope: Deactivated successfully.
Sep 9 23:38:57.751584 containerd[1499]: time="2025-09-09T23:38:57.751491338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\" id:\"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\" pid:2824 exit_status:137 exited_at:{seconds:1757461137 nanos:751016737}"
Sep 9 23:38:57.752317 containerd[1499]: time="2025-09-09T23:38:57.752284740Z" level=info msg="StopContainer for \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" returns successfully"
Sep 9 23:38:57.753065 containerd[1499]: time="2025-09-09T23:38:57.753010981Z" level=info msg="StopPodSandbox for \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\""
Sep 9 23:38:57.753149 containerd[1499]: time="2025-09-09T23:38:57.753127982Z" level=info msg="Container to stop \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:38:57.753186 containerd[1499]: time="2025-09-09T23:38:57.753148142Z" level=info msg="Container to stop \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:38:57.753207 containerd[1499]: time="2025-09-09T23:38:57.753187862Z" level=info msg="Container to stop \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:38:57.753413 containerd[1499]: time="2025-09-09T23:38:57.753199222Z" level=info msg="Container to stop \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:38:57.753451 containerd[1499]: time="2025-09-09T23:38:57.753415902Z" level=info msg="Container to stop \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:38:57.761465 systemd[1]: cri-containerd-4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a.scope: Deactivated successfully.
Sep 9 23:38:57.780582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e-rootfs.mount: Deactivated successfully.
Sep 9 23:38:57.784588 containerd[1499]: time="2025-09-09T23:38:57.784543844Z" level=info msg="shim disconnected" id=979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e namespace=k8s.io
Sep 9 23:38:57.789558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a-rootfs.mount: Deactivated successfully.
Sep 9 23:38:57.807675 containerd[1499]: time="2025-09-09T23:38:57.784584084Z" level=warning msg="cleaning up after shim disconnected" id=979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e namespace=k8s.io
Sep 9 23:38:57.807675 containerd[1499]: time="2025-09-09T23:38:57.807648730Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:38:57.807898 containerd[1499]: time="2025-09-09T23:38:57.794088583Z" level=info msg="shim disconnected" id=4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a namespace=k8s.io
Sep 9 23:38:57.807898 containerd[1499]: time="2025-09-09T23:38:57.807772851Z" level=warning msg="cleaning up after shim disconnected" id=4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a namespace=k8s.io
Sep 9 23:38:57.807898 containerd[1499]: time="2025-09-09T23:38:57.807802171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:38:57.821983 containerd[1499]: time="2025-09-09T23:38:57.821936439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" id:\"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" pid:2766 exit_status:137 exited_at:{seconds:1757461137 nanos:764656045}"
Sep 9 23:38:57.822101 containerd[1499]: time="2025-09-09T23:38:57.821936559Z" level=info msg="received exit event sandbox_id:\"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" exit_status:137 exited_at:{seconds:1757461137 nanos:764656045}"
Sep 9 23:38:57.822134 containerd[1499]: time="2025-09-09T23:38:57.821944279Z" level=info msg="received exit event sandbox_id:\"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\" exit_status:137 exited_at:{seconds:1757461137 nanos:751016737}"
Sep 9 23:38:57.823562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e-shm.mount: Deactivated successfully.
Sep 9 23:38:57.824276 containerd[1499]: time="2025-09-09T23:38:57.823936203Z" level=info msg="TearDown network for sandbox \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" successfully"
Sep 9 23:38:57.824276 containerd[1499]: time="2025-09-09T23:38:57.823967163Z" level=info msg="StopPodSandbox for \"4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a\" returns successfully"
Sep 9 23:38:57.823685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f56e8cdc85715e21d85590d083f064d79f1957f6419d7e2570db4153d10ce3a-shm.mount: Deactivated successfully.
Sep 9 23:38:57.824651 containerd[1499]: time="2025-09-09T23:38:57.824622124Z" level=info msg="TearDown network for sandbox \"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\" successfully"
Sep 9 23:38:57.824700 containerd[1499]: time="2025-09-09T23:38:57.824652964Z" level=info msg="StopPodSandbox for \"979c9c58a2741bc3c0b09d89d14acd0045939aa60818fcdf8c9886088ddad53e\" returns successfully"
Sep 9 23:38:57.855213 kubelet[2616]: I0909 23:38:57.855122 2616 scope.go:117] "RemoveContainer" containerID="a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e"
Sep 9 23:38:57.857083 containerd[1499]: time="2025-09-09T23:38:57.857050389Z" level=info msg="RemoveContainer for \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\""
Sep 9 23:38:57.870007 containerd[1499]: time="2025-09-09T23:38:57.869859175Z" level=info msg="RemoveContainer for \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" returns successfully"
Sep 9 23:38:57.870217 kubelet[2616]: I0909 23:38:57.870195 2616 scope.go:117] "RemoveContainer" containerID="a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e"
Sep 9 23:38:57.870475 containerd[1499]: time="2025-09-09T23:38:57.870438896Z" level=error msg="ContainerStatus for \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\": not found"
Sep 9 23:38:57.872701 kubelet[2616]: E0909 23:38:57.872660 2616 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\": not found" containerID="a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e"
Sep 9 23:38:57.872799 kubelet[2616]: I0909 23:38:57.872705 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e"} err="failed to get container status \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6a44a5dce9b4dbee42b182bb288dcf244c2041e16abd4c9c7244818e3e0487e\": not found"
Sep 9 23:38:57.872849 kubelet[2616]: I0909 23:38:57.872801 2616 scope.go:117] "RemoveContainer" containerID="3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a"
Sep 9 23:38:57.875513 containerd[1499]: time="2025-09-09T23:38:57.875471506Z" level=info msg="RemoveContainer for \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\""
Sep 9 23:38:57.881994 containerd[1499]: time="2025-09-09T23:38:57.881936239Z" level=info msg="RemoveContainer for \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" returns successfully"
Sep 9 23:38:57.882191 kubelet[2616]: I0909 23:38:57.882147 2616 scope.go:117] "RemoveContainer" containerID="e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d"
Sep 9 23:38:57.883753 containerd[1499]: time="2025-09-09T23:38:57.883726042Z" level=info msg="RemoveContainer for \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\""
Sep 9 23:38:57.896091 containerd[1499]: time="2025-09-09T23:38:57.896044147Z" level=info msg="RemoveContainer for \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" returns successfully"
Sep 9 23:38:57.896385 kubelet[2616]: I0909 23:38:57.896349 2616 scope.go:117] "RemoveContainer" containerID="0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185"
Sep 9 23:38:57.898722 containerd[1499]: time="2025-09-09T23:38:57.898624512Z" level=info msg="RemoveContainer for \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\""
Sep 9 23:38:57.903127 containerd[1499]: time="2025-09-09T23:38:57.903087761Z" level=info msg="RemoveContainer for \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" returns successfully"
Sep 9 23:38:57.903376 kubelet[2616]: I0909 23:38:57.903349 2616 scope.go:117] "RemoveContainer" containerID="1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce"
Sep 9 23:38:57.904989 containerd[1499]: time="2025-09-09T23:38:57.904962685Z" level=info msg="RemoveContainer for \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\""
Sep 9 23:38:57.907846 containerd[1499]: time="2025-09-09T23:38:57.907808691Z" level=info msg="RemoveContainer for \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" returns successfully"
Sep 9 23:38:57.908037 kubelet[2616]: I0909 23:38:57.908003 2616 scope.go:117] "RemoveContainer" containerID="9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c"
Sep 9 23:38:57.909629 containerd[1499]: time="2025-09-09T23:38:57.909593854Z" level=info msg="RemoveContainer for \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\""
Sep 9 23:38:57.912442 containerd[1499]: time="2025-09-09T23:38:57.912395460Z" level=info msg="RemoveContainer for \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" returns successfully"
Sep 9 23:38:57.912673 kubelet[2616]: I0909 23:38:57.912648 2616 scope.go:117] "RemoveContainer" containerID="3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a"
Sep 9 23:38:57.912914 containerd[1499]: time="2025-09-09T23:38:57.912855861Z" level=error msg="ContainerStatus for \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\": not found"
Sep 9 23:38:57.913064 kubelet[2616]: E0909 23:38:57.913025 2616 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\": not found" containerID="3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a"
Sep 9 23:38:57.913107 kubelet[2616]: I0909 23:38:57.913063 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a"} err="failed to get container status \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d023c97276e637f28c36180424c6087be23323081e1685f424998e3948d6e0a\": not found"
Sep 9 23:38:57.913107 kubelet[2616]: I0909 23:38:57.913084 2616 scope.go:117] "RemoveContainer" containerID="e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d"
Sep 9 23:38:57.913310 containerd[1499]: time="2025-09-09T23:38:57.913273461Z" level=error msg="ContainerStatus for \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\": not found"
Sep 9 23:38:57.913433 kubelet[2616]: E0909 23:38:57.913405 2616 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\": not found" containerID="e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d"
Sep 9 23:38:57.913477 kubelet[2616]: I0909 23:38:57.913427 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d"} err="failed to get container status \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e59b0737d6b88561c87a6ac1c6f053e06129307d00e91c3f45506fab20d6513d\": not found"
Sep 9 23:38:57.913477 kubelet[2616]: I0909 23:38:57.913443 2616 scope.go:117] "RemoveContainer" containerID="0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185"
Sep 9 23:38:57.913658 containerd[1499]: time="2025-09-09T23:38:57.913598982Z" level=error msg="ContainerStatus for \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\": not found"
Sep 9 23:38:57.913767 kubelet[2616]: E0909 23:38:57.913740 2616 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\": not found" containerID="0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185"
Sep 9 23:38:57.913807 kubelet[2616]: I0909 23:38:57.913773 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185"} err="failed to get container status \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\": rpc error: code = NotFound desc = an error occurred when try to find container \"0435f22dd1c4e3b95850a7d8dfa802758b773e0b7946d93281e9244795a69185\": not found"
Sep 9 23:38:57.913807 kubelet[2616]: I0909 23:38:57.913790 2616 scope.go:117] "RemoveContainer" containerID="1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce"
Sep 9 23:38:57.913967 containerd[1499]: time="2025-09-09T23:38:57.913939743Z" level=error msg="ContainerStatus for \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\": not found"
Sep 9 23:38:57.914107 kubelet[2616]: E0909 23:38:57.914076 2616 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\": not found" containerID="1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce"
Sep 9 23:38:57.914147 kubelet[2616]: I0909 23:38:57.914110 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce"} err="failed to get container status \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ee33baa25bd295710877aecb14b34e69568b868bedc64a2ed75531e8111adce\": not found"
Sep 9 23:38:57.914147 kubelet[2616]: I0909 23:38:57.914127 2616 scope.go:117] "RemoveContainer" containerID="9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c"
Sep 9 23:38:57.914471 containerd[1499]: time="2025-09-09T23:38:57.914371864Z" level=error msg="ContainerStatus for \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\": not found"
Sep 9 23:38:57.914537 kubelet[2616]: E0909 23:38:57.914513 2616 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\": not found" containerID="9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c"
Sep 9 23:38:57.914617 kubelet[2616]: I0909 23:38:57.914550 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c"} err="failed to get container status \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f15a314d4d0c350d56ef1f5981fa645c95740379b3298780156bbc03e6c540c\": not found"
Sep 9 23:38:57.970809 kubelet[2616]: I0909 23:38:57.970772 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8t9k\" (UniqueName: \"kubernetes.io/projected/338b1ff6-be49-48be-ae1e-09451dbffc93-kube-api-access-b8t9k\") pod \"338b1ff6-be49-48be-ae1e-09451dbffc93\" (UID: \"338b1ff6-be49-48be-ae1e-09451dbffc93\") "
Sep 9 23:38:57.970809 kubelet[2616]: I0909 23:38:57.970816 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-lib-modules\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.970952 kubelet[2616]: I0909 23:38:57.970833 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-xtables-lock\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.970952 kubelet[2616]: I0909 23:38:57.970851 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-etc-cni-netd\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.970952 kubelet[2616]: I0909 23:38:57.970871 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-config-path\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.970952 kubelet[2616]: I0909 23:38:57.970884 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-net\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.970952 kubelet[2616]: I0909 23:38:57.970899 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-hubble-tls\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.970952 kubelet[2616]: I0909 23:38:57.970915 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-bpf-maps\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.971079 kubelet[2616]: I0909 23:38:57.970935 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cni-path\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.971079 kubelet[2616]: I0909 23:38:57.970952 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj44j\" (UniqueName: \"kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-kube-api-access-zj44j\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.971079 kubelet[2616]: I0909 23:38:57.970970 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-kernel\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.971079 kubelet[2616]: I0909 23:38:57.970984 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-hostproc\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.971079 kubelet[2616]: I0909 23:38:57.971000 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/338b1ff6-be49-48be-ae1e-09451dbffc93-cilium-config-path\") pod \"338b1ff6-be49-48be-ae1e-09451dbffc93\" (UID: \"338b1ff6-be49-48be-ae1e-09451dbffc93\") "
Sep 9 23:38:57.971079 kubelet[2616]: I0909 23:38:57.971015 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-cgroup\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.971195 kubelet[2616]: I0909 23:38:57.971032 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79e730d5-0043-4884-b0d8-8982fce1f5f5-clustermesh-secrets\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.971195 kubelet[2616]: I0909 23:38:57.971047 2616 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-run\") pod \"79e730d5-0043-4884-b0d8-8982fce1f5f5\" (UID: \"79e730d5-0043-4884-b0d8-8982fce1f5f5\") "
Sep 9 23:38:57.974332 kubelet[2616]: I0909 23:38:57.974294 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 23:38:57.975694 kubelet[2616]: I0909 23:38:57.974454 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 23:38:57.975694 kubelet[2616]: I0909 23:38:57.974505 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 23:38:57.975694 kubelet[2616]: I0909 23:38:57.974531 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 23:38:57.975694 kubelet[2616]: I0909 23:38:57.974542 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cni-path" (OuterVolumeSpecName: "cni-path") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 23:38:57.975694 kubelet[2616]: I0909 23:38:57.975329 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 23:38:57.975855 kubelet[2616]: I0909 23:38:57.975539 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 23:38:57.975855 kubelet[2616]: I0909 23:38:57.975585 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:38:57.975855 kubelet[2616]: I0909 23:38:57.975604 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-hostproc" (OuterVolumeSpecName: "hostproc") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:38:57.975855 kubelet[2616]: I0909 23:38:57.975621 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 23:38:57.977306 kubelet[2616]: I0909 23:38:57.977275 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 23:38:57.977691 kubelet[2616]: I0909 23:38:57.977664 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/338b1ff6-be49-48be-ae1e-09451dbffc93-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "338b1ff6-be49-48be-ae1e-09451dbffc93" (UID: "338b1ff6-be49-48be-ae1e-09451dbffc93"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 23:38:57.978217 kubelet[2616]: I0909 23:38:57.978186 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 23:38:57.978453 kubelet[2616]: I0909 23:38:57.978366 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e730d5-0043-4884-b0d8-8982fce1f5f5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 23:38:57.978453 kubelet[2616]: I0909 23:38:57.978397 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338b1ff6-be49-48be-ae1e-09451dbffc93-kube-api-access-b8t9k" (OuterVolumeSpecName: "kube-api-access-b8t9k") pod "338b1ff6-be49-48be-ae1e-09451dbffc93" (UID: "338b1ff6-be49-48be-ae1e-09451dbffc93"). InnerVolumeSpecName "kube-api-access-b8t9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 23:38:57.979971 kubelet[2616]: I0909 23:38:57.979937 2616 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-kube-api-access-zj44j" (OuterVolumeSpecName: "kube-api-access-zj44j") pod "79e730d5-0043-4884-b0d8-8982fce1f5f5" (UID: "79e730d5-0043-4884-b0d8-8982fce1f5f5"). InnerVolumeSpecName "kube-api-access-zj44j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 23:38:58.071295 kubelet[2616]: I0909 23:38:58.071238 2616 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8t9k\" (UniqueName: \"kubernetes.io/projected/338b1ff6-be49-48be-ae1e-09451dbffc93-kube-api-access-b8t9k\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 23:38:58.071459 2616 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 23:38:58.071478 2616 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 23:38:58.071487 2616 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 23:38:58.071496 2616 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 23:38:58.071504 2616 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 23:38:58.071514 2616 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 
23:38:58.071522 2616 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071620 kubelet[2616]: I0909 23:38:58.071529 2616 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071785 kubelet[2616]: I0909 23:38:58.071537 2616 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj44j\" (UniqueName: \"kubernetes.io/projected/79e730d5-0043-4884-b0d8-8982fce1f5f5-kube-api-access-zj44j\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071785 kubelet[2616]: I0909 23:38:58.071546 2616 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071785 kubelet[2616]: I0909 23:38:58.071567 2616 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071785 kubelet[2616]: I0909 23:38:58.071575 2616 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/338b1ff6-be49-48be-ae1e-09451dbffc93-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071785 kubelet[2616]: I0909 23:38:58.071583 2616 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071785 kubelet[2616]: I0909 23:38:58.071590 2616 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/79e730d5-0043-4884-b0d8-8982fce1f5f5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.071785 kubelet[2616]: I0909 23:38:58.071599 2616 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79e730d5-0043-4884-b0d8-8982fce1f5f5-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 23:38:58.159999 systemd[1]: Removed slice kubepods-besteffort-pod338b1ff6_be49_48be_ae1e_09451dbffc93.slice - libcontainer container kubepods-besteffort-pod338b1ff6_be49_48be_ae1e_09451dbffc93.slice. Sep 9 23:38:58.166667 systemd[1]: Removed slice kubepods-burstable-pod79e730d5_0043_4884_b0d8_8982fce1f5f5.slice - libcontainer container kubepods-burstable-pod79e730d5_0043_4884_b0d8_8982fce1f5f5.slice. Sep 9 23:38:58.166952 systemd[1]: kubepods-burstable-pod79e730d5_0043_4884_b0d8_8982fce1f5f5.slice: Consumed 6.879s CPU time, 123.6M memory peak, 2M read from disk, 12.9M written to disk. Sep 9 23:38:58.606662 kubelet[2616]: I0909 23:38:58.606596 2616 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="338b1ff6-be49-48be-ae1e-09451dbffc93" path="/var/lib/kubelet/pods/338b1ff6-be49-48be-ae1e-09451dbffc93/volumes" Sep 9 23:38:58.607148 kubelet[2616]: I0909 23:38:58.607104 2616 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e730d5-0043-4884-b0d8-8982fce1f5f5" path="/var/lib/kubelet/pods/79e730d5-0043-4884-b0d8-8982fce1f5f5/volumes" Sep 9 23:38:58.708001 systemd[1]: var-lib-kubelet-pods-338b1ff6\x2dbe49\x2d48be\x2dae1e\x2d09451dbffc93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8t9k.mount: Deactivated successfully. Sep 9 23:38:58.708108 systemd[1]: var-lib-kubelet-pods-79e730d5\x2d0043\x2d4884\x2db0d8\x2d8982fce1f5f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzj44j.mount: Deactivated successfully. 
Sep 9 23:38:58.708158 systemd[1]: var-lib-kubelet-pods-79e730d5\x2d0043\x2d4884\x2db0d8\x2d8982fce1f5f5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 23:38:58.708219 systemd[1]: var-lib-kubelet-pods-79e730d5\x2d0043\x2d4884\x2db0d8\x2d8982fce1f5f5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 23:38:59.595262 sshd[4221]: Connection closed by 10.0.0.1 port 54482 Sep 9 23:38:59.596354 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Sep 9 23:38:59.608764 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:54482.service: Deactivated successfully. Sep 9 23:38:59.610564 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 23:38:59.611372 systemd-logind[1470]: Session 23 logged out. Waiting for processes to exit. Sep 9 23:38:59.614070 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:54488.service - OpenSSH per-connection server daemon (10.0.0.1:54488). Sep 9 23:38:59.614777 systemd-logind[1470]: Removed session 23. Sep 9 23:38:59.672317 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 54488 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:38:59.673672 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:38:59.678033 systemd-logind[1470]: New session 24 of user core. Sep 9 23:38:59.687469 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 23:39:00.505259 sshd[4379]: Connection closed by 10.0.0.1 port 54488 Sep 9 23:39:00.505633 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Sep 9 23:39:00.519022 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:54488.service: Deactivated successfully. 
Sep 9 23:39:00.522287 kubelet[2616]: E0909 23:39:00.522190 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79e730d5-0043-4884-b0d8-8982fce1f5f5" containerName="mount-bpf-fs" Sep 9 23:39:00.522287 kubelet[2616]: E0909 23:39:00.522229 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="338b1ff6-be49-48be-ae1e-09451dbffc93" containerName="cilium-operator" Sep 9 23:39:00.522287 kubelet[2616]: E0909 23:39:00.522238 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79e730d5-0043-4884-b0d8-8982fce1f5f5" containerName="mount-cgroup" Sep 9 23:39:00.522287 kubelet[2616]: E0909 23:39:00.522244 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79e730d5-0043-4884-b0d8-8982fce1f5f5" containerName="apply-sysctl-overwrites" Sep 9 23:39:00.522287 kubelet[2616]: E0909 23:39:00.522270 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79e730d5-0043-4884-b0d8-8982fce1f5f5" containerName="clean-cilium-state" Sep 9 23:39:00.522287 kubelet[2616]: E0909 23:39:00.522275 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79e730d5-0043-4884-b0d8-8982fce1f5f5" containerName="cilium-agent" Sep 9 23:39:00.522287 kubelet[2616]: I0909 23:39:00.522303 2616 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e730d5-0043-4884-b0d8-8982fce1f5f5" containerName="cilium-agent" Sep 9 23:39:00.522695 kubelet[2616]: I0909 23:39:00.522308 2616 memory_manager.go:354] "RemoveStaleState removing state" podUID="338b1ff6-be49-48be-ae1e-09451dbffc93" containerName="cilium-operator" Sep 9 23:39:00.526032 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 23:39:00.530356 systemd-logind[1470]: Session 24 logged out. Waiting for processes to exit. Sep 9 23:39:00.536654 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:60752.service - OpenSSH per-connection server daemon (10.0.0.1:60752). Sep 9 23:39:00.542894 systemd-logind[1470]: Removed session 24. 
Sep 9 23:39:00.560202 systemd[1]: Created slice kubepods-burstable-pod1e9b6919_d5e0_44cf_b311_838a2417fd5b.slice - libcontainer container kubepods-burstable-pod1e9b6919_d5e0_44cf_b311_838a2417fd5b.slice. Sep 9 23:39:00.598912 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 60752 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:39:00.600371 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:39:00.604697 systemd-logind[1470]: New session 25 of user core. Sep 9 23:39:00.615481 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 23:39:00.668327 sshd[4393]: Connection closed by 10.0.0.1 port 60752 Sep 9 23:39:00.668896 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Sep 9 23:39:00.682986 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:60752.service: Deactivated successfully. Sep 9 23:39:00.685929 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 23:39:00.687536 systemd-logind[1470]: Session 25 logged out. Waiting for processes to exit. 
Sep 9 23:39:00.688565 kubelet[2616]: I0909 23:39:00.688401 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-cilium-run\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688565 kubelet[2616]: I0909 23:39:00.688446 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e9b6919-d5e0-44cf-b311-838a2417fd5b-cilium-ipsec-secrets\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688565 kubelet[2616]: I0909 23:39:00.688473 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e9b6919-d5e0-44cf-b311-838a2417fd5b-hubble-tls\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688565 kubelet[2616]: I0909 23:39:00.688494 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcqj4\" (UniqueName: \"kubernetes.io/projected/1e9b6919-d5e0-44cf-b311-838a2417fd5b-kube-api-access-jcqj4\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688565 kubelet[2616]: I0909 23:39:00.688514 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-cni-path\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688565 kubelet[2616]: I0909 23:39:00.688531 2616 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e9b6919-d5e0-44cf-b311-838a2417fd5b-clustermesh-secrets\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688774 kubelet[2616]: I0909 23:39:00.688548 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-host-proc-sys-kernel\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688774 kubelet[2616]: I0909 23:39:00.688565 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-bpf-maps\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688774 kubelet[2616]: I0909 23:39:00.688580 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-xtables-lock\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688774 kubelet[2616]: I0909 23:39:00.688596 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-cilium-cgroup\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688774 kubelet[2616]: I0909 23:39:00.688612 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1e9b6919-d5e0-44cf-b311-838a2417fd5b-cilium-config-path\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688774 kubelet[2616]: I0909 23:39:00.688627 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-host-proc-sys-net\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688906 kubelet[2616]: I0909 23:39:00.688653 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-etc-cni-netd\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.688906 kubelet[2616]: I0909 23:39:00.688672 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-hostproc\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.689458 kubelet[2616]: I0909 23:39:00.689355 2616 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e9b6919-d5e0-44cf-b311-838a2417fd5b-lib-modules\") pod \"cilium-n4lqh\" (UID: \"1e9b6919-d5e0-44cf-b311-838a2417fd5b\") " pod="kube-system/cilium-n4lqh" Sep 9 23:39:00.692477 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:60754.service - OpenSSH per-connection server daemon (10.0.0.1:60754). Sep 9 23:39:00.693181 systemd-logind[1470]: Removed session 25. 
Sep 9 23:39:00.775317 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 60754 ssh2: RSA SHA256:dVGL2zumnWizGzsOSYID+1qjGEdZrqRTZUf8FmvVils Sep 9 23:39:00.777566 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:39:00.784755 systemd-logind[1470]: New session 26 of user core. Sep 9 23:39:00.801564 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 23:39:00.867457 containerd[1499]: time="2025-09-09T23:39:00.867397496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n4lqh,Uid:1e9b6919-d5e0-44cf-b311-838a2417fd5b,Namespace:kube-system,Attempt:0,}" Sep 9 23:39:00.892912 containerd[1499]: time="2025-09-09T23:39:00.892856221Z" level=info msg="connecting to shim 081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f" address="unix:///run/containerd/s/56d2c4906152b00736c9681441e9dbf16fa227a9391df9207f377946bce8ef60" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:39:00.935631 systemd[1]: Started cri-containerd-081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f.scope - libcontainer container 081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f. 
Sep 9 23:39:00.971773 containerd[1499]: time="2025-09-09T23:39:00.971717566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n4lqh,Uid:1e9b6919-d5e0-44cf-b311-838a2417fd5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\"" Sep 9 23:39:00.975190 containerd[1499]: time="2025-09-09T23:39:00.975132058Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:39:00.996080 containerd[1499]: time="2025-09-09T23:39:00.995911048Z" level=info msg="Container c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:39:01.005503 containerd[1499]: time="2025-09-09T23:39:01.005432401Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726\"" Sep 9 23:39:01.006189 containerd[1499]: time="2025-09-09T23:39:01.005971443Z" level=info msg="StartContainer for \"c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726\"" Sep 9 23:39:01.007146 containerd[1499]: time="2025-09-09T23:39:01.007116728Z" level=info msg="connecting to shim c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726" address="unix:///run/containerd/s/56d2c4906152b00736c9681441e9dbf16fa227a9391df9207f377946bce8ef60" protocol=ttrpc version=3 Sep 9 23:39:01.036471 systemd[1]: Started cri-containerd-c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726.scope - libcontainer container c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726. 
Sep 9 23:39:01.065579 containerd[1499]: time="2025-09-09T23:39:01.065541909Z" level=info msg="StartContainer for \"c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726\" returns successfully" Sep 9 23:39:01.072381 systemd[1]: cri-containerd-c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726.scope: Deactivated successfully. Sep 9 23:39:01.075570 containerd[1499]: time="2025-09-09T23:39:01.075527667Z" level=info msg="received exit event container_id:\"c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726\" id:\"c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726\" pid:4471 exited_at:{seconds:1757461141 nanos:75152465}" Sep 9 23:39:01.075776 containerd[1499]: time="2025-09-09T23:39:01.075630307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726\" id:\"c85fd6e3a7c02f09d94ed995b9c3b822792a3356cb120d9a0e2c5c6ef7bd1726\" pid:4471 exited_at:{seconds:1757461141 nanos:75152465}" Sep 9 23:39:01.877769 containerd[1499]: time="2025-09-09T23:39:01.877727106Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 23:39:01.891277 containerd[1499]: time="2025-09-09T23:39:01.890892315Z" level=info msg="Container 0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:39:01.898683 containerd[1499]: time="2025-09-09T23:39:01.898469064Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487\"" Sep 9 23:39:01.899746 containerd[1499]: time="2025-09-09T23:39:01.899679229Z" level=info msg="StartContainer for 
\"0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487\"" Sep 9 23:39:01.900876 containerd[1499]: time="2025-09-09T23:39:01.900790633Z" level=info msg="connecting to shim 0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487" address="unix:///run/containerd/s/56d2c4906152b00736c9681441e9dbf16fa227a9391df9207f377946bce8ef60" protocol=ttrpc version=3 Sep 9 23:39:01.927450 systemd[1]: Started cri-containerd-0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487.scope - libcontainer container 0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487. Sep 9 23:39:01.955588 containerd[1499]: time="2025-09-09T23:39:01.955547800Z" level=info msg="StartContainer for \"0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487\" returns successfully" Sep 9 23:39:01.963506 systemd[1]: cri-containerd-0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487.scope: Deactivated successfully. Sep 9 23:39:01.963780 containerd[1499]: time="2025-09-09T23:39:01.963530071Z" level=info msg="received exit event container_id:\"0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487\" id:\"0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487\" pid:4517 exited_at:{seconds:1757461141 nanos:963302950}" Sep 9 23:39:01.963780 containerd[1499]: time="2025-09-09T23:39:01.963719871Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487\" id:\"0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487\" pid:4517 exited_at:{seconds:1757461141 nanos:963302950}" Sep 9 23:39:02.680511 kubelet[2616]: E0909 23:39:02.680467 2616 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 23:39:02.803983 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-0416fd35a4f759b584e63b8a881d5e138ec1de4f674cff34633fcfb31d1a3487-rootfs.mount: Deactivated successfully. Sep 9 23:39:02.897274 containerd[1499]: time="2025-09-09T23:39:02.895687571Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 23:39:02.910242 containerd[1499]: time="2025-09-09T23:39:02.909219668Z" level=info msg="Container d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:39:02.924448 containerd[1499]: time="2025-09-09T23:39:02.924402412Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35\"" Sep 9 23:39:02.925389 containerd[1499]: time="2025-09-09T23:39:02.925359256Z" level=info msg="StartContainer for \"d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35\"" Sep 9 23:39:02.926775 containerd[1499]: time="2025-09-09T23:39:02.926747421Z" level=info msg="connecting to shim d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35" address="unix:///run/containerd/s/56d2c4906152b00736c9681441e9dbf16fa227a9391df9207f377946bce8ef60" protocol=ttrpc version=3 Sep 9 23:39:02.950437 systemd[1]: Started cri-containerd-d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35.scope - libcontainer container d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35. Sep 9 23:39:02.996231 systemd[1]: cri-containerd-d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35.scope: Deactivated successfully. 
Sep 9 23:39:02.997633 containerd[1499]: time="2025-09-09T23:39:02.997469719Z" level=info msg="received exit event container_id:\"d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35\" id:\"d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35\" pid:4560 exited_at:{seconds:1757461142 nanos:996970556}"
Sep 9 23:39:02.997858 containerd[1499]: time="2025-09-09T23:39:02.997757320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35\" id:\"d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35\" pid:4560 exited_at:{seconds:1757461142 nanos:996970556}"
Sep 9 23:39:03.001289 containerd[1499]: time="2025-09-09T23:39:03.001259574Z" level=info msg="StartContainer for \"d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35\" returns successfully"
Sep 9 23:39:03.804144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8732a0cadb73d2abada5c7fc5e8153f8167a70a20012a2885273267407bdb35-rootfs.mount: Deactivated successfully.
Sep 9 23:39:03.886846 containerd[1499]: time="2025-09-09T23:39:03.886727489Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:39:03.901271 containerd[1499]: time="2025-09-09T23:39:03.900783193Z" level=info msg="Container bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:39:03.909499 containerd[1499]: time="2025-09-09T23:39:03.909459233Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a\""
Sep 9 23:39:03.910023 containerd[1499]: time="2025-09-09T23:39:03.910001716Z" level=info msg="StartContainer for \"bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a\""
Sep 9 23:39:03.911120 containerd[1499]: time="2025-09-09T23:39:03.911080001Z" level=info msg="connecting to shim bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a" address="unix:///run/containerd/s/56d2c4906152b00736c9681441e9dbf16fa227a9391df9207f377946bce8ef60" protocol=ttrpc version=3
Sep 9 23:39:03.932460 systemd[1]: Started cri-containerd-bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a.scope - libcontainer container bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a.
Sep 9 23:39:03.956467 systemd[1]: cri-containerd-bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a.scope: Deactivated successfully.
Sep 9 23:39:03.958620 containerd[1499]: time="2025-09-09T23:39:03.958570779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a\" id:\"bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a\" pid:4601 exited_at:{seconds:1757461143 nanos:958267778}"
Sep 9 23:39:03.959987 containerd[1499]: time="2025-09-09T23:39:03.959963266Z" level=info msg="received exit event container_id:\"bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a\" id:\"bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a\" pid:4601 exited_at:{seconds:1757461143 nanos:958267778}"
Sep 9 23:39:03.967093 containerd[1499]: time="2025-09-09T23:39:03.967051618Z" level=info msg="StartContainer for \"bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a\" returns successfully"
Sep 9 23:39:03.978632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf8b2f05f85884c5154e2f49e18750ac1b52aca1dee231d0d878823cf693302a-rootfs.mount: Deactivated successfully.
Sep 9 23:39:04.048307 kubelet[2616]: I0909 23:39:04.048245 2616 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T23:39:04Z","lastTransitionTime":"2025-09-09T23:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 23:39:04.894023 containerd[1499]: time="2025-09-09T23:39:04.893979508Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:39:04.905927 containerd[1499]: time="2025-09-09T23:39:04.905880368Z" level=info msg="Container f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:39:04.906793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562556701.mount: Deactivated successfully.
Sep 9 23:39:04.908997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492399575.mount: Deactivated successfully.
Sep 9 23:39:04.927069 containerd[1499]: time="2025-09-09T23:39:04.925842787Z" level=info msg="CreateContainer within sandbox \"081697f3f9791825cdc9048554916587a134beab229043f17bac71ec3fc4ba3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\""
Sep 9 23:39:04.927651 containerd[1499]: time="2025-09-09T23:39:04.927616636Z" level=info msg="StartContainer for \"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\""
Sep 9 23:39:04.928595 containerd[1499]: time="2025-09-09T23:39:04.928558761Z" level=info msg="connecting to shim f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571" address="unix:///run/containerd/s/56d2c4906152b00736c9681441e9dbf16fa227a9391df9207f377946bce8ef60" protocol=ttrpc version=3
Sep 9 23:39:04.956774 systemd[1]: Started cri-containerd-f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571.scope - libcontainer container f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571.
Sep 9 23:39:05.013179 containerd[1499]: time="2025-09-09T23:39:05.013131027Z" level=info msg="StartContainer for \"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\" returns successfully"
Sep 9 23:39:05.086916 containerd[1499]: time="2025-09-09T23:39:05.086873703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\" id:\"b2f8c1df983862968b8108826d4d9d6cbff207fe57de9c0fad9fc1084c77436a\" pid:4668 exited_at:{seconds:1757461145 nanos:86527581}"
Sep 9 23:39:05.289303 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 23:39:05.919850 kubelet[2616]: I0909 23:39:05.919758 2616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n4lqh" podStartSLOduration=5.919727571 podStartE2EDuration="5.919727571s" podCreationTimestamp="2025-09-09 23:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:39:05.91775052 +0000 UTC m=+83.391163487" watchObservedRunningTime="2025-09-09 23:39:05.919727571 +0000 UTC m=+83.393140498"
Sep 9 23:39:07.221145 containerd[1499]: time="2025-09-09T23:39:07.220758190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\" id:\"5bfb18589c3e9b7a38c31df70c7927e078b37ec984365fd3fc9c180f34d8a3c6\" pid:4845 exit_status:1 exited_at:{seconds:1757461147 nanos:220225587}"
Sep 9 23:39:08.327347 systemd-networkd[1431]: lxc_health: Link UP
Sep 9 23:39:08.333793 systemd-networkd[1431]: lxc_health: Gained carrier
Sep 9 23:39:09.357676 containerd[1499]: time="2025-09-09T23:39:09.357620725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\" id:\"f971f53058b325e8ae9bc7e7ff64589a6b44eee3cf4bda10174ffb07fafddb62\" pid:5204 exited_at:{seconds:1757461149 nanos:357222963}"
Sep 9 23:39:09.769431 systemd-networkd[1431]: lxc_health: Gained IPv6LL
Sep 9 23:39:11.530623 containerd[1499]: time="2025-09-09T23:39:11.530566334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\" id:\"0f4d735188b432de3f7fda2ffe095468365647a38f4057af7ec9f852d73b4e3b\" pid:5238 exited_at:{seconds:1757461151 nanos:529731088}"
Sep 9 23:39:13.643588 containerd[1499]: time="2025-09-09T23:39:13.643522696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8ce6f479092aba3ed6b22a2318ad59a3a1263419b06958c5eff7c8bbeee6571\" id:\"ee1442964314d0884876ce54d37f66a5975431611cae31a8a719769d81b1d959\" pid:5269 exited_at:{seconds:1757461153 nanos:643042892}"
Sep 9 23:39:13.650365 sshd[4405]: Connection closed by 10.0.0.1 port 60754
Sep 9 23:39:13.651588 sshd-session[4400]: pam_unix(sshd:session): session closed for user core
Sep 9 23:39:13.655909 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:60754.service: Deactivated successfully.
Sep 9 23:39:13.657995 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 23:39:13.660023 systemd-logind[1470]: Session 26 logged out. Waiting for processes to exit.
Sep 9 23:39:13.661455 systemd-logind[1470]: Removed session 26.